README.md in dev-lxc-0.3.0 vs README.md in dev-lxc-0.3.1
- old
+ new
@@ -14,173 +14,209 @@
2. Btrfs - Efficient storage backend provides fast, lightweight container cloning
3. Dnsmasq - DHCP networking and DNS resolution
4. Base containers - Containers that are built to resemble a traditional server
5. ruby-lxc - Ruby bindings for liblxc
6. YAML - Simple, customizable definition of clusters; No more setting ENV variables
-7. Build process closely models online installation documentation
+7. Build process closely follows online installation documentation
Its containers, standard init, networking and build process are designed to be similar
to what you would build by following the online installation documentation, so the end
result is a cluster that closely resembles a traditionally built cluster.
The Btrfs-backed clones provide a quick clean slate, which is especially helpful for
experimenting and troubleshooting. They can also be used to build a customized cluster
for demo purposes that can be brought up quickly and reliably.
-While most of the plumbing is already in place for an HA cluster it actually can't be
-used since I haven't been able to get DRBD working inside containers yet.
-
If you aren't familiar with using containers, please read this introduction.
[LXC 1.0 Introduction](https://www.stgraber.org/2013/12/20/lxc-1-0-blog-post-series/)
## Requirements
-The `dev-lxc` tool is designed to be used in a platform built by the
-[dev-lxc-platform](https://github.com/jeremiahsnapp/dev-lxc-platform) cookbook.
+* dev-lxc-platform
-Please follow the dev-lxc-platform usage instructions to create a suitable platform.
+ The `dev-lxc` tool is designed to be used in a platform built by
+ [dev-lxc-platform](https://github.com/jeremiahsnapp/dev-lxc-platform).
-The cookbook will automatically install this `dev-lxc` tool.
+ Please follow the dev-lxc-platform usage instructions to create a suitable platform.
-### Use root
+ The dev-lxc-platform will automatically install this `dev-lxc` tool.
-Once you login to the Vagrant VM you should run `sudo -i` to login as the root user.
+* Use root user
-Consider using `byobu` or `tmux` for a terminal multiplexer as `dev-lxc-platform` README
-describes.
+ Once you log in to the Vagrant VM platform, run `sudo -i` to log in as the root user.
-### Mounts and Packages (batteries not included)
+ Consider using `byobu` or `tmux` as a terminal multiplexer, as the [dev-lxc-platform README
+ describes](https://github.com/jeremiahsnapp/dev-lxc-platform#use-a-terminal-multiplexer).
-As described below `dev-lxc` uses a YAML config file for each cluster.
+* Setup Mounts and Packages
-This config file describes what directories get mounted from the Vagrant VM host into
-each container. You need to make sure that you configure the mount entries to be
-appropriate for your environment.
+ As [described below](https://github.com/jeremiahsnapp/dev-lxc#cluster-config-files)
+ `dev-lxc` uses a `dev-lxc.yml` config file for each cluster.
+ Be sure that you configure the `mounts` and `packages` lists in `dev-lxc.yml` to match your
+ particular environment.
-The same goes for the paths to each Chef package. The paths that are provided in the default
-configs are just examples. You need to make sure that you have each package you want to
-use downloaded to appropriate directories that will be available to the container when
-it is started.
+## Update dev-lxc gem
-I recommend downloading the packages to a directory on your workstation.
-Then configure the `dev-lxc-platform` `Vagrantfile` to mount that directory in the
-Vagrant VM. Finally, configure the cluster's mount entries to mount the Vagrant
-VM directory into each container.
+Run `gem update dev-lxc` inside the Vagrant VM platform to ensure you have the latest version.
-## Update `dev-lxc` gem
+## Usage
-Run `gem update dev-lxc` inside the Vagrant VM to ensure you have the latest version.
+### Shorter Commands are Faster (to type, that is :)
-## Background
+The dev-lxc-platform's root user's `~/.bashrc` file aliases `dl` to `dev-lxc` for ease of use, but
+for clarity most instructions in this README use `dev-lxc`.
-### Base Containers
+You only have to type enough of a `dev-lxc` subcommand to make it unique.
-One of the key things this tool uses is the concept of "base" containers.
+The following commands are equivalent:
-`dev-lxc` creates base containers with a "p-", "s-" or "u-" prefix on the name to distinguish it as
-a "platform", "shared" or "unique" base container.
+```
+dev-lxc cluster init standalone > dev-lxc.yml
+dl cl i standalone > dev-lxc.yml
+```
-Base containers are then snapshot cloned using the btrfs filesystem to very quickly
-provide a lightweight duplicate of the base container. This clone is either used to build
-another base container or a container that will actually be run.
+```
+dev-lxc cluster start
+dl cl start
+```
-During a cluster build process the base containers that get created fall into three categories.
+```
+dev-lxc cluster status
+dl cl stat
+```
-1. Platform
+```
+dev-lxc cluster destroy
+dl cl d
+```
- The platform base container is the first to get created and is identified by the
- "p-" prefix on the container name.
+### Create and Manage a Cluster
- `DevLXC#create_base_platform` controls the creation of a platform base container.
+The following instructions use a tier cluster for demonstration purposes.
+This cluster uses about 3GB of RAM, and the first build of its servers takes
+a long time. Feel free to try the standalone config first.
- This container provides the chosen OS platform and version (e.g. p-ubuntu-1404).
- A typical LXC container has minimal packages installed so `dev-lxc` makes sure that the
- same packages used in Chef's [bento boxes](https://github.com/opscode/bento) are
- installed to provide a more typical server environment.
- A few additional packages are also installed.
+#### Define cluster
- *Once this platform base container is created there is rarely a need to delete it.*
+The following command saves a predefined config to `dev-lxc.yml`.
-2. Shared
+Be sure you configure the
+[mounts and packages entries](https://github.com/jeremiahsnapp/dev-lxc#cluster-config-files)
+appropriately.
- The shared base container is the second to get created and is identified by the
- "s-" prefix on the container name.
+ dev-lxc cluster init tier > dev-lxc.yml
- `DevLXC::ChefServer#create_base_server` controls the creation of a shared base container.
+#### Start cluster
- Chef packages that are common to all servers in a Chef cluster, such as Chef server,
- opscode-reporting and opscode-push-jobs-server are installed using `dpkg` or `rpm`.
+Starting the cluster the first time takes a while since it has a lot to build.
- Note the manage package will not be installed at this point since it is not common to all
- servers (i.e. it does not get installed on backend servers).
+The tool automatically creates snapshot clones at appropriate times, so future
+creation of the cluster's servers is very quick.
- The name of this base container is built from the names and versions of the Chef packages that
- get installed which makes this base container easy to be reused by another cluster that is
- configured to use the same Chef packages.
+ dev-lxc cluster start
- *Since no configuration actually happens yet there is rarely a need to delete this container.*
+A test org and user, along with a knife.rb and keys, are automatically created on
+the bootstrap backend server in `/root/chef-repo/.chef` for testing purposes.
-3. Unique
+The `knife-opc` plugin is installed in the embedded Ruby environment of the
+Private Chef and Enterprise Chef servers to facilitate the creation of the test
+org and user.
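+
+One hedged way to verify this test setup is to run knife from inside the bootstrap backend;
+the container name below comes from the sample tier config, and the knife path assumes an
+Enterprise Chef package's embedded Ruby environment.
+
+ lxc-attach -n be-tier.lxc -- /bin/bash -c 'cd /root/chef-repo && /opt/opscode/embedded/bin/knife user list'
+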
- The unique base container is the last to get created and is identified by the
- "u-" prefix on the container name.
+#### Cluster status
- `DevLXC::ChefServer#create` controls the creation of a unique base container.
+Run the following command to see the status of the cluster.
- Each unique Chef server (e.g. standalone, backend or frontend) is created.
+ dev-lxc cluster status
- * The specified hostname is assigned.
- * dnsmasq is configured to reserve the specified IP address for the container's MAC address.
- * A DNS entry is created in dnsmasq if appropriate.
- * All installed Chef packages are configured.
- * Test users and orgs are created.
- * The opscode-manage package is installed and configured if specified.
+This is an example of the output.
- After each server is fully configured a snapshot clone of it is made resulting in the server's
- unique base container. These unique base containers make it very easy to quickly recreate
- a Chef cluster from a clean starting point.
+ Cluster is available at https://chef-tier.lxc
+ be-tier.lxc running 10.0.3.202
+ fe1-tier.lxc running 10.0.3.203
-#### Destroying Base Containers
+[https://chef-tier.lxc](https://chef-tier.lxc) resolves to the frontend.
-When using `dev-lxc cluster destroy` to destroy an entire Chef cluster or `dev-lxc server destroy [NAME]`
-to destroy a single Chef server you have the option to also destroy any or all of the three types
-of base containers associated with the cluster or server.
+#### Create chef-repo
-Either of the following commands will list the options available.
+Create a local chef-repo with appropriate knife.rb and pem files.
- dev-lxc cluster help destroy
+ dev-lxc cluster chef-repo
- dev-lxc server help destroy
+Now you can easily use knife to access the cluster.
-Of course, you can also just use the standard LXC commands to destroy any container.
+ cd chef-repo
+ knife ssl fetch
+ knife client list
- lxc-destroy -n [NAME]
+#### Cheap cluster rebuilds
-#### Manually Create a Platform Base Container
+Clones of the servers as they existed immediately after initial installation, configuration and
+test org and user creation are available, so you can destroy the cluster and "rebuild" it within
+seconds, effectively starting from a clean slate.
-Platform base containers can be used for purposes other than building clusters. For example, they can
-be used as Chef nodes for testing purposes.
+ dev-lxc cluster destroy
+ dev-lxc cluster start
-You can see a menu of platform base containers this tool can create by using the following command.
+#### Stop and start the cluster
- dev-lxc create
+ dev-lxc cluster stop
+ dev-lxc cluster start
-The initial creation of platform base containers can take awhile so let's go ahead and start creating
-an Ubuntu 12.04 base container now.
+#### Backdoor access to each server's filesystem
- dev-lxc create p-ubuntu-1404
+The `abspath` subcommand prepends each server's rootfs path to a given file path.
-### Cluster Config Files
+For example, you can use the following command to edit each server's `chef-server.rb` file without
+logging into the containers.
-dev-lxc uses a yaml configuration file to define a cluster.
+ emacs $(dev-lxc cluster abspath /etc/opscode/chef-server.rb)
+#### Run arbitrary commands in each server
+
+After modifying `chef-server.rb` you can use the `run_command` subcommand to tell each server
+to run `chef-server-ctl reconfigure`.
+
+ dev-lxc cluster run_command 'chef-server-ctl reconfigure'
+
+#### Destroy cluster
+
+Use the following command to destroy the cluster's servers and also destroy their unique and shared
+base containers if you want to build them from scratch.
+
+ dev-lxc cluster destroy -u -s
+
+#### Use commands against a specific server
+You can also run most of these commands against individual servers by using the `server` subcommand.
+
+ dev-lxc server ...
+
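+For example, restarting just the first frontend from the sample tier config might look like
+the following sketch, assuming the `server` subcommand takes the server's name as an argument
+the way `dev-lxc server destroy [NAME]` does.
+
+ dev-lxc server stop fe1-tier.lxc
+ dev-lxc server start fe1-tier.lxc
+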
+### Using the dev-lxc library
+
+dev-lxc can also be used as a library.
+
+ require 'yaml'
+ require 'dev-lxc'
+ cluster = DevLXC::ChefCluster.new(YAML.load(IO.read('dev-lxc.yml')))
+ cluster.start
+ cluster.status
+ cluster.run_command("uptime")
+ server = DevLXC::ChefServer.new("fe1-tier.lxc", YAML.load(IO.read('dev-lxc.yml')))
+ server.stop
+ server.start
+ server.run_command("chef-server-ctl reconfigure")
+ cluster.destroy
+
+## Cluster Config Files
+
+dev-lxc uses a YAML configuration file named `dev-lxc.yml` to define a cluster.
+
The following command generates sample config files for various cluster topologies.
dev-lxc cluster init
-`dev-lxc cluster init tier` generates the following file:
+`dev-lxc cluster init tier > dev-lxc.yml` creates a `dev-lxc.yml` file with the following content:
platform_container: p-ubuntu-1404
topology: tier
api_fqdn: chef-tier.lxc
mounts:
@@ -201,39 +237,55 @@
# fe2-tier.lxc:
# role: frontend
# ipaddress: 10.0.3.204
This config defines a tier cluster consisting of a single backend and a single frontend.
-A second frontend is commented out to conserve resources.
-If you uncomment the second frontend then both frontends will be created and dnsmasq will
-resolve the `api_fqdn` [chef-tier.lxc](chef-tier.lxc) to both frontends using a round-robin policy.
+A second frontend is commented out to conserve resources. If you uncomment the second
+frontend then both frontends will be created and dnsmasq will resolve the `api_fqdn`
+[chef-tier.lxc](chef-tier.lxc) to both frontends using a round-robin policy.
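+
+With both frontends enabled, one quick check of the round-robin resolution from the platform
+VM is a DNS lookup, assuming the `host` utility is installed there.
+
+ host chef-tier.lxc
+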
The config file is very customizable. You can add or remove mounts, packages or servers,
change IP addresses, change server names, change the `platform_container` and more.
-Make sure the mounts and packages represent paths that are available in your environment.
+The `mounts` list describes what directories get mounted from the Vagrant VM platform into
+each container. Make sure the mount entries are appropriate for your environment.
+The same is true for the `packages` list. The paths provided in the default configs are just examples.
+Make sure each package you want to use is downloaded to a directory that will be
+available to the container when it is started.
+
+I recommend downloading the packages to a directory on your workstation.
+Then configure the
+[dev-lxc-platform's .kitchen.yml](https://github.com/jeremiahsnapp/dev-lxc-platform#description)
+to mount that directory in the Vagrant VM platform.
+Finally, configure the cluster's mount entries in `dev-lxc.yml` to mount the Vagrant VM platform's
+directory into each container.
+
+Make sure the mounts and packages represent actual paths that are available in your environment.
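+
+For illustration only, customized `mounts` and `packages` entries might look like the
+following sketch; every directory and package path here is an assumption and must point
+at files that actually exist in your environment.
+
+ mounts:
+   - /root/dev root/dev
+ packages:
+   server: /root/dev/chef-packages/ec/private-chef_11.1.8-1_amd64.deb
+   manage: /root/dev/chef-packages/manage/opscode-manage_1.3.2-1_amd64.deb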
+
### Managing Multiple Clusters
-By default, `dev-lxc` looks for a `dev-lxc.yaml` file in the present working directory.
+By default, `dev-lxc` looks for a `dev-lxc.yml` file in the present working directory.
You can also specify a particular config file as an option for most dev-lxc commands.
-I use the following strategy to avoid specifying each cluster's config file while managing multiple clusters.
+The following example manages multiple clusters without having to specify
+each cluster's config file.
mkdir -p ~/clusters/{clusterA,clusterB}
- dev-lxc cluster init tier > ~/clusters/clusterA/dev-lxc.yaml
- dev-lxc cluster init standalone > ~/clusters/clusterB/dev-lxc.yaml
+ dev-lxc cluster init tier > ~/clusters/clusterA/dev-lxc.yml
+ dev-lxc cluster init standalone > ~/clusters/clusterB/dev-lxc.yml
cd ~/clusters/clusterA && dev-lxc cluster start # starts clusterA
cd ~/clusters/clusterB && dev-lxc cluster start # starts clusterB
### Maintain Uniqueness Across Multiple Clusters
The default cluster configs are already designed to be distinct from each other, but as you build
more clusters you have to maintain uniqueness across the YAML config files for the following items.
-1. Server names and `api_fqdn`
+* Server names and `api_fqdn`
Server names should be unique across all clusters.
Even when cluster A is shut down, if cluster B uses the same server names when it is created,
it will use the already existing servers from cluster A.
@@ -244,13 +296,13 @@
will overwrite cluster A's DNS resolution of `api_fqdn`.
It is easy to provide uniqueness. For example, you can use the following command to replace `-tier`
with `-1234` in a tier cluster's config.
- sed -i 's/-tier/-1234/' dev-lxc.yaml
+ sed -i 's/-tier/-1234/' dev-lxc.yml
-2. IP Addresses
+* IP Addresses
IP address uniqueness only matters when clusters with the same IPs are running.
If cluster B is started with the same IPs as an already running cluster A, then cluster B
will overwrite cluster A's DHCP reservation of the IPs but dnsmasq will still refuse to
@@ -259,110 +311,105 @@
The `dev-lxc-platform` creates the IP range 10.0.3.150 - 10.0.3.254 for DHCP reserved IPs.
Use unique IPs from that range when configuring clusters.
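+
+For example, a one-liner in the spirit of the earlier `-tier` replacement can shift the
+sample tier config's IPs (10.0.3.202-204) into an unused part of that range; the pattern
+below is only an illustration.
+
+ sed -i 's/10\.0\.3\.20/10.0.3.23/' dev-lxc.yml
+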
-## Usage
+## Base Containers
-### Shorter Commands are Faster (to type that is :)
+A key concept in this tool is the "base" container.
-The root user's `~/.bashrc` file has aliased `dl` to `dev-lxc` for ease of use but for most
-instructions in this README I will use `dev-lxc`.
+`dev-lxc` creates base containers with a "p-", "s-" or "u-" prefix on the name to mark each one as
+a "platform", "shared" or "unique" base container.
-You only have to type enough of a `dev-lxc` subcommand to make it unique.
+Base containers are then snapshot cloned using the btrfs filesystem to very quickly
+provide a lightweight duplicate. Each clone is used either to build
+another base container or as a container that will actually be run.
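+
+Conceptually the cloning step is similar to LXC's own snapshot clone command, sketched
+below for illustration; `dev-lxc` actually drives this through the ruby-lxc bindings, and
+the target container name is hypothetical.
+
+ lxc-clone -s -o p-ubuntu-1404 -n s-example
+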
-The following commands are equivalent:
+During a cluster build process the base containers that get created fall into three categories.
- dev-lxc cluster init standalone
- dl cl i standalone
+1. Platform
- dev-lxc cluster start
- dl cl start
+ The platform base container is the first to get created and is identified by the
+ "p-" prefix on the container name.
- dev-lxc cluster destroy
- dl cl d
+ `DevLXC#create_base_platform` controls the creation of a platform base container.
-### Create and Manage a Cluster
+ This container provides the chosen OS platform and version (e.g. p-ubuntu-1404).
+ A typical LXC container has minimal packages installed, so `dev-lxc` makes sure that the
+ same packages used in Chef's [bento boxes](https://github.com/opscode/bento) are
+ installed to provide a more typical server environment.
+ A few additional packages are also installed.
-The following instructions will use a tier cluster for demonstration purposes.
-The size of this cluster uses about 3GB ram and takes a long time for the first
-build of the servers. Feel free to try the standalone config first.
+ *Once this platform base container is created there is rarely a need to delete it.*
-The following command saves a predefined config to dev-lxc.yaml.
+2. Shared
- dev-lxc cluster init tier > dev-lxc.yaml
+ The shared base container is the second to get created and is identified by the
+ "s-" prefix on the container name.
-Starting the cluster the first time takes awhile since it has a lot to build.
+ `DevLXC::ChefServer#create_base_server` controls the creation of a shared base container.
-The tool automatically creates snapshot clones at appropriate times so future
-creation of the cluster's servers is very quick.
+ Chef packages that are common to all servers in a Chef cluster, such as Chef server,
+ opscode-reporting and opscode-push-jobs-server, are installed using `dpkg` or `rpm`.
- dev-lxc cluster start
+ Note the manage package will not be installed at this point since it is not common to all
+ servers (i.e. it does not get installed on backend servers).
-[https://chef-tier.lxc](https://chef-tier.lxc) resolves to the frontend.
+ The name of this base container is built from the names and versions of the Chef packages that
+ get installed, which makes it easy to reuse this base container in another cluster that is
+ configured to use the same Chef packages.
-A test org and user and knife.rb and keys are automatically created in
-the bootstrap backend server in /root/chef-repo/.chef for testing purposes.
-The `knife-opc` plugin is installed in the embedded ruby environment of the
-Private Chef and Enterprise Chef server to facilitate the creation of the test
-org and user.
+ *Since no configuration actually happens yet there is rarely a need to delete this container.*
-Show the status of the cluster.
+3. Unique
- dev-cluster status
+ The unique base container is the last to get created and is identified by the
+ "u-" prefix on the container name.
-Create a local chef-repo with appropriate knife.rb and pem files.
-This makes it easy to use knife.
+ `DevLXC::ChefServer#create` controls the creation of a unique base container.
- dev-lxc cluster chef-repo
- cd chef-repo
- knife ssl fetch
- knife client list
+ Each unique Chef server (e.g. standalone, backend or frontend) is created.
-Stop the cluster's servers.
+ * The specified hostname is assigned.
+ * dnsmasq is configured to reserve the specified IP address for the container's MAC address.
+ * A DNS entry is created in dnsmasq if appropriate.
+ * All installed Chef packages are configured.
+ * Test users and orgs are created.
+ * The opscode-manage package is installed and configured if specified.
- dev-lxc cluster stop
+ After each server is fully configured, a snapshot clone of it is made, resulting in the server's
+ unique base container. These unique base containers make it very easy to quickly recreate
+ a Chef cluster from a clean starting point.
-Clones of the servers as they existed immediately after initial installation and configuration
-are available so you can destroy the cluster and "rebuild" it within seconds effectively starting
-with a clean slate.
+### Destroying Base Containers
- dev-lxc cluster destroy
- dev-lxc cluster start
+When using `dev-lxc cluster destroy` to destroy an entire Chef cluster or `dev-lxc server destroy [NAME]`
+to destroy a single Chef server you have the option to also destroy any or all of the three types
+of base containers associated with the cluster or server.
-The abspath subcommand can be used to prepend each server's rootfs path to a particular file.
+Either of the following commands will list the options available.
-For example, to edit each server's private-chef.rb file you can use the following command.
+ dev-lxc cluster help destroy
- emacs $(dev-lxc cluster abspath /etc/opscode/private-chef.rb)
+ dev-lxc server help destroy
-After modifying the private-chef.rb you could use the run_command subcommand to tell each server
-to run `private-chef-ctl reconfigure`.
+Of course, you can also just use the standard LXC commands to destroy any container.
- dev-lxc cluster run_command 'private-chef-ctl reconfigure'
+ lxc-destroy -n [NAME]
-Use the following command to destroy the cluster's servers and also destroy their unique and shared
-base containers so you can build them from scratch.
+### Manually Create a Platform Base Container
- dev-lxc cluster destroy -u -s
+Platform base containers can be used for purposes other than building clusters. For example, they can
+be used as Chef nodes for testing purposes.
-You can also run most of these commands against individual servers by using the server subcommand.
+You can see a menu of platform base containers this tool can create by using the following command.
- dev-lxc server ...
+ dev-lxc create
-### Using the dev-lxc library
+The initial creation of platform base containers can take a while, so let's go ahead and start creating
+an Ubuntu 14.04 base container now.
-dev-lxc can also be used as a library if preferred.
-
- irb(main):001:0> require 'yaml'
- irb(main):002:0> require 'dev-lxc'
- irb(main):003:0> cluster = DevLXC::ChefCluster.new(YAML.load(IO.read('dev-lxc.yaml')))
- irb(main):004:0> cluster.start
- irb(main):005:0> server = DevLXC::ChefServer.new("fe1-tier.lxc", YAML.load(IO.read('dev-lxc.yaml')))
- irb(main):006:0> server.stop
- irb(main):007:0> server.start
- irb(main):008:0> server.run_command("private-chef-ctl reconfigure")
- irb(main):009:0> cluster.destroy
+ dev-lxc create p-ubuntu-1404
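+
+Once the platform base container exists, a snapshot clone of it can serve as a throwaway
+test node. The following sketch uses plain LXC commands, and the clone's name is hypothetical.
+
+ lxc-clone -s -o p-ubuntu-1404 -n test-node
+ lxc-start -d -n test-node
+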
## Contributing
1. Fork it
2. Create your feature branch (`git checkout -b my-new-feature`)