README.md in dev-lxc-0.6.4 vs README.md in dev-lxc-1.0.0
- old
+ new
@@ -1,17 +1,20 @@
# dev-lxc
-A tool for creating Chef Server clusters with a Chef Analytics server using LXC containers.
+A tool for creating Chef Server clusters and Chef Analytics clusters using LXC containers.
Using [ruby-lxc](https://github.com/lxc/ruby-lxc) it builds a standalone Chef Server or
tier Chef Server cluster composed of a backend and multiple frontends with round-robin
-DNS resolution. It will also optionally build a standalone or tier Chef Analytics server
-and connect it with the Chef Server.
+DNS resolution. It can also build a standalone or tier Chef Analytics server and connect
+it with the Chef Server.
-The dev-lxc tool is well suited as a tool for support related work, customized
-cluster builds for demo purposes, as well as general experimentation and exploration.
+dev-lxc also has commands to manipulate Chef node containers. For example, dev-lxc can bootstrap a
+container by installing Chef Client, configuring it for a Chef Server and running a specified run_list.
+The dev-lxc tool is well suited for support-related work, customized cluster builds
+for demo purposes, and general experimentation and exploration of Chef products.
+
### Features
1. LXC 1.0 Containers - Resource efficient servers with fast start/stop times and standard init
2. Btrfs - Efficient storage backend provides fast, lightweight container cloning
3. Dnsmasq - DHCP networking and DNS resolution
@@ -57,10 +60,27 @@
As [described below](https://github.com/jeremiahsnapp/dev-lxc#cluster-config-files)
`dev-lxc` uses a `dev-lxc.yml` config file for each cluster.
Be sure that you configure the `mounts` and `packages` lists in `dev-lxc.yml` to match your
particular environment.
+The package paths in dev-lxc's example configs assume that the packages are stored in the
+following directory structure in the dev-lxc-platform VM. I recommend creating that
+directory structure on the physical workstation and configuring dev-lxc-platform's `.kitchen.yml`
+to mount the structure into `/dev-shared` in the dev-lxc-platform VM.
+
+```
+/dev-shared/chef-packages/
+├── analytics
+├── cs
+├── ec
+├── manage
+├── osc
+├── push-jobs-server
+├── reporting
+└── sync
+```
+
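A cluster config can then point at packages under `/dev-shared`. The fragment below is a hypothetical sketch only; the exact keys, platform names and package versions depend on your own `dev-lxc.yml` and on the packages you have downloaded.

```
chef-server:
  mounts:
    - /dev-shared dev-shared
  packages:
    server: /dev-shared/chef-packages/cs/chef-server-core_12.0.8-1_amd64.deb
    manage: /dev-shared/chef-packages/manage/opscode-manage_1.11.2-1_amd64.deb
```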
## Update dev-lxc gem
Run `gem update dev-lxc` inside the dev-lxc-platform VM to ensure you have the latest version.
## Usage
@@ -128,31 +148,13 @@
#### List Images
List each server's images created during the build process.
```
-dev-lxc list_images
+dev-lxc list-images
```
-#### Start cluster
-
-Starting the cluster the first time takes awhile since it has a lot to build.
-
-The tool automatically creates images at appropriate times so future creation of the
-cluster's servers is very quick.
-
-```
-dev-lxc up
-```
-
-A test org, user, knife.rb and keys are automatically created in
-the bootstrap backend server in `/root/chef-repo/.chef` for testing purposes.
-
-The `knife-opc` plugin is installed in the embedded ruby environment of the
-Private Chef and Enterprise Chef server to facilitate the creation of the test
-org and user.
-
#### Cluster status
Run the following command to see the status of the cluster.
```
@@ -170,17 +172,74 @@
chef-fe1.lxc running 10.0.3.204
analytics-be.lxc running 10.0.3.206
analytics-fe1.lxc running 10.0.3.207
```
+#### cluster-view, tks, tls commands
+
+The dev-lxc-platform comes with some commands that create and manage helpful
+tmux/byobu sessions to more easily see the state of a cluster.
+
+Running the `cluster-view` command in the same directory as a `dev-lxc.yml` file
+creates a tmux/byobu session with the same name as the cluster's directory.
+`cluster-view` can also be given the parent directory of a `dev-lxc.yml` file
+as its first argument, in which case it changes to that directory before
+creating the tmux/byobu session.
+
+The session's first window is named "cluster".
+
+The left side is for running dev-lxc commands.
+
+The right side is made up of three vertically stacked panes with each pane's content
+updating every 0.5 seconds.
+
+* Top - system's memory usage provided by `free -h`
+* Middle - cluster's status provided by `dev-lxc status`
+* Bottom - list of the cluster's images provided by `dev-lxc list-images`
+
+The session's second window is named "shell". It opens in the same directory as the
+cluster's `dev-lxc.yml` file.
+
+The `tls` and `tks` commands are really aliases.
+
+`tls` is an alias for `tmux list-sessions` and is used to see what tmux/byobu sessions
+are running.
+
+`tks` is an alias for `tmux kill-session -t` and is used to kill tmux/byobu sessions.
+When specifying the session to be killed you only need as many characters of the session
+name as are required to make the name unique among the list of running sessions.
+
+I recommend switching to a different running tmux/byobu session before killing the current
+tmux/byobu session. Otherwise you will need to reattach to the remaining tmux/byobu session.
+Use the keyboard shortcuts Alt-Up/Down to easily switch between tmux/byobu sessions.
+
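Since `tls` and `tks` are plain tmux aliases, their definitions amount to something like the following sketch:

```shell
# Aliases as described above; dev-lxc-platform ships equivalents
alias tls='tmux list-sessions'

# "tks chef-t" kills the session whose name starts with "chef-t"
alias tks='tmux kill-session -t'
```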
+#### Start cluster
+
+Starting the cluster the first time takes a while since it has a lot to build.
+
+The tool automatically creates images at appropriate times so future creation of the
+cluster's servers is very quick.
+
+```
+dev-lxc up
+```
+
+A test org, user, knife.rb and keys are automatically created in
+the bootstrap backend server in `/root/chef-repo/.chef` for testing purposes.
+
+The `knife-opc` plugin is installed in the embedded ruby environment of the
+Private Chef and Enterprise Chef server to facilitate the creation of the test
+org and user.
+
#### Create chef-repo
Create a local chef-repo with appropriate knife.rb and pem files.
-Also create a `./bootstrap-node` script to simplify creating and
-bootstrapping nodes in the `dev-lxc-platform`.
+Use the `-p` option to also get pivotal.pem and pivotal.rb files.
+Use the `-f` option to overwrite existing knife.rb and pivotal.rb files.
+
```
dev-lxc chef-repo
```
Now you can easily use knife to access the cluster.
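For example, once the cluster is up you might verify access from the new chef-repo with a couple of standard knife commands (illustrative; requires a running cluster):

```
cd chef-repo
knife ssl fetch
knife client list
```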
@@ -220,17 +279,25 @@
emacs $(dev-lxc abspath chef /etc/opscode/chef-server.rb)
```
#### Run arbitrary commands in each server
-After modifying the chef-server.rb you could use the run_command subcommand to tell the backend and
+After modifying the chef-server.rb you could use the run-command subcommand to tell the backend and
frontend servers to run `chef-server-ctl reconfigure`.
```
-dev-lxc run_command chef 'chef-server-ctl reconfigure'
+dev-lxc run-command chef 'chef-server-ctl reconfigure'
```
+#### Attach the terminal to a server
+
+Attach the terminal to a server in the cluster that matches the REGEX pattern given.
+
+```
+dev-lxc attach chef-be
+```
+
#### Make a snapshot of the servers
Save the changes in the servers to custom images.
```
@@ -252,10 +319,19 @@
```
dev-lxc destroy -c -u -s
```
+#### Global status of all dev-lxc images and servers
+
+Use the `global-status` command to see the status of all dev-lxc images and servers stored in dev-lxc's
+default LXC config_path `/var/lib/dev-lxc`.
+
+```
+dev-lxc global-status
+```
+
#### Use commands against specific servers
You can also run most of these commands against a set of servers by specifying a regular expression
that matches a set of server names.
```
@@ -264,13 +340,79 @@
For example, to only start the Chef Servers named `chef-be.lxc` and `chef-fe1.lxc`
you can run the following command.
```
-dev-lxc up 'chef'
+dev-lxc up chef
```
+### Managing Node Containers
+
+#### Manually Create a Platform Image
+
+Platform images can be used for purposes other than building clusters. For example, they can
+be used as Chef nodes for testing purposes.
+
+You can see a menu of platform images this tool can create by using the following command.
+
+```
+dev-lxc create
+```
+
+The initial creation of platform images can take a while, so let's go ahead and start creating
+an Ubuntu 14.04 image now.
+
+```
+dev-lxc create p-ubuntu-1404
+```
+
+#### Install Chef Client in a Container
+
+Use the `-v` option to specify a particular version of Chef Client.
+
+Use `-v latest` or leave out the `-v` option to install the latest version of Chef Client.
+
+For example, install the latest 11.x version of Chef Client.
+
+```
+dev-lxc install-chef-client test-node.lxc -v 11
+```
+
+#### Configure Chef Client in a Container
+
+Use the `-s`, `-u`, `-k` options to set `chef_server_url`, `validation_client_name` and
+`validation_key` in a container's `/etc/chef/client.rb` and copy the validator's key to
+`/etc/chef/validation.pem`.
+
+If these options are omitted, the values default to those of the cluster defined
+in `dev-lxc.yml`.
+
+```
+dev-lxc config-chef-client test-node.lxc
+```
+
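With the cluster defaults, the resulting `/etc/chef/client.rb` would contain settings along these lines (server URL and names are illustrative, not the actual generated values):

```
chef_server_url 'https://chef.lxc/organizations/demo'
validation_client_name 'demo-validator'
validation_key '/etc/chef/validation.pem'
```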
+#### Bootstrap Chef Client in a Container
+
+Specifying a `BASE_CONTAINER_NAME` clones the base container into a new container
+and bootstraps it. If no `BASE_CONTAINER_NAME` is given, the container to be bootstrapped
+must already exist.
+
+Use the `-v` option to specify a particular version of Chef Client.
+
+Use the `-s`, `-u`, `-k` options to set `chef_server_url`, `validation_client_name` and
+`validation_key` in a container's `/etc/chef/client.rb` and copy the validator's key to
+`/etc/chef/validation.pem`.
+
+If these options are omitted, the values default to those of the cluster defined
+in `dev-lxc.yml`.
+
+Use the `-r` option to specify the run_list for chef-client to use.
+
+```
+dev-lxc bootstrap-container test-node.lxc -r my_run_list
+```
+
### Using the dev-lxc library
The dev-lxc CLI interface can be used as a library.
```
@@ -280,11 +422,11 @@
DevLXC::CLI::DevLXC.start
ARGV = [ 'status' ] # show status of all servers
DevLXC::CLI::DevLXC.start
-ARGV = [ 'run_command', 'uptime' ] # run `uptime` in all servers
+ARGV = [ 'run-command', 'uptime' ] # run `uptime` in all servers
DevLXC::CLI::DevLXC.start
ARGV = [ 'destroy' ] # destroy all servers
DevLXC::CLI::DevLXC.start
```
@@ -300,10 +442,11 @@
server.start # start chef-fe1.lxc
server.status # show status of chef-fe1.lxc
server.run_command("chef-server-ctl reconfigure") # run command in chef-fe1.lxc
server.stop # stop chef-fe1.lxc
+server.destroy # destroy chef-fe1.lxc
```
## Cluster Config Files
dev-lxc uses a YAML configuration file named `dev-lxc.yml` to define a cluster.
@@ -469,10 +612,16 @@
Images are then cloned using the btrfs filesystem to very quickly provide a lightweight duplicate
of the image. This clone is either used to build the next image in the build process or the final
container that will actually be run.
+By default, the cluster's images and final server containers are all stored in `/var/lib/dev-lxc`
+so they don't clutter the containers stored in the default LXC config_path `/var/lib/lxc`.
+
+The cluster's LXC config_path can be configured by setting `lxc_config_path` at the top of the
+`dev-lxc.yml` file to the desired directory.
+
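For example, a single top-level setting selects the directory (path illustrative):

```
# at the top of dev-lxc.yml
lxc_config_path: /var/lib/dev-lxc-custom
```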
There are four image categories.
1. Platform
The platform image is the first to get created and is identified by the
@@ -551,27 +700,9 @@
Of course, you can also just use the standard LXC commands to destroy any container.
```
lxc-destroy -n [NAME]
-```
-
-### Manually Create a Platform Image
-
-Platform images can be used for purposes other than building clusters. For example, they can
-be used as Chef nodes for testing purposes.
-
-You can see a menu of platform images this tool can create by using the following command.
-
-```
-dev-lxc create
-```
-
-The initial creation of platform images can take awhile so let's go ahead and start creating
-an Ubuntu 14.04 image now.
-
-```
-dev-lxc create p-ubuntu-1404
```
## Contributing
1. Fork it