h1. Using Elastic Map-Reduce in Wukong

h2. Initial Setup

# Sign up for Elastic MapReduce and S3 at Amazon AWS.
# Download the Amazon elastic-mapreduce runner: either the official version at http://elasticmapreduce.s3.amazonaws.com/elastic-mapreduce-ruby.zip or the infochimps fork (which has support for Ruby 1.9) at http://github.com/infochimps/elastic-mapreduce .
# Create a bucket and path to hold your EMR logs, scripts and other ephemera. For instance, you might choose 'emr.yourdomain.com' as the bucket and '/wukong' as a scoping path within that bucket. In that case you will refer to it with a path like @s3://emr.yourdomain.com/wukong@ (see the notes below about s3n:// vs. s3:// URLs).
# Copy the contents of wukong/examples/emr/dot_wukong_dir to @~/.wukong@
# Edit emr.yaml and credentials.json, adding your keys where appropriate and following the other instructions. Start with a single-node m1.small cluster, as you'll probably have some false starts before the flow of logging in, checking the logs, etc. becomes clear.
# You should now be good to launch a program. We'll give it the @--alive@ flag so that the machine sticks around if there were any issues: @./elastic_mapreduce_example.rb --run=emr --alive s3://emr.yourdomain.com/wukong/data/input s3://emr.yourdomain.com/wukong/data/output@
# If you visit the "AWS console":http://bit.ly/awsconsole you should now see a jobflow with two steps. The first sets up debugging for the job; the second is your hadoop task.
# The "AWS console":http://bit.ly/awsconsole also has the public IP of the master node. You can log in to the machine directly:
ssh -i /path/to/your/keypair.pem hadoop@ec2-148-37-14-128.compute-1.amazonaws.com

h3. Lorkbong

Lorkbong (named after the staff carried by Sun Wukong) is a very simple example Heroku app that lets you trigger showing job status or launching a new job, either by visiting a special URL or by triggering a rake task. Get its code from http://github.com/mrflip/lorkbong

h3. s3n:// vs. s3:// URLs

Many external tools use a URI convention to address files in S3; they typically use the 's3://' scheme, which makes a lot of sense:

s3://emr.yourcompany.com/wukong/happy_job_1/logs/whatever-20100808.log

Hadoop can maintain an HDFS on Amazon S3: it uses a block structure and has optimizations for streaming, no file size limitation, and other goodness. However, only Hadoop tools can interpret the contents of those blocks -- to everything else it just looks like a soup of blocks labelled block_-8675309 and so forth. Hadoop unfortunately chose the 's3://' scheme for URIs in this filesystem:

s3://s3hdfs.yourcompany.com/path/to/data

Hadoop is also happy to read S3 native files -- 'native' as in, you can look at them with a browser and upload and download them with any S3 tool out there. There's a 5GB limit on file size, and in some cases a performance hit (though in our experience not enough to worry about). You refer to these files with the 's3n://' scheme ('n' as in 'native'):

s3n://emr.yourcompany.com/wukong/happy_job_1/code/happy_job_1-mapper.rb
s3n://emr.yourcompany.com/wukong/happy_job_1/code/happy_job_1-reducer.rb
s3n://emr.yourcompany.com/wukong/happy_job_1/logs/whatever-20100808.log

Wukong will coerce paths to the right scheme when it knows what that scheme should be (e.g. code should be s3n://); otherwise it leaves the path alone. Specifically, if you use a URI scheme for input and output paths, you must use 's3n://' for normal S3 files.
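For example, to spell out the native scheme explicitly, the job from the setup section could be launched like this (the bucket name is the same placeholder used above):

./elastic_mapreduce_example.rb --run=emr --alive s3n://emr.yourdomain.com/wukong/data/input s3n://emr.yourdomain.com/wukong/data/output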
h2. Advanced Tips n' Tricks for common usage

h3. Direct access to logs using your browser

Each Hadoop component exposes a web dashboard for you to access. Use the following ports:

* 9100: Job tracker (master only)
* 9101: Namenode (master only)
* 9102: Datanodes
* 9103: Task trackers

They will only, however, respond to web requests from within the cluster's private subnet. You can browse the cluster by creating a persistent tunnel to the Hadoop master node and configuring your browser to use it as a proxy.

h4. Create a tunneling proxy to your cluster

To create a tunnel from your local machine to the master node, substitute the keypair and the master node's address into this command:

ssh -i ~/.wukong/keypairs/KEYPAIR.pem -f -N -D 6666 -o StrictHostKeyChecking=no -o "ConnectTimeout=10" -o "ServerAliveInterval=60" -o "ControlPath=none" ubuntu@MASTER_NODE_PUBLIC_IP
The command will silently background itself if it worked.
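To confirm the tunnel is actually up (or to tear it down later), you can look for the backgrounded process. This is a quick sketch, with the grep pattern keyed to the @-D 6666@ flag used above:

# List the backgrounded tunnel ([s]sh keeps grep from matching itself):
ps aux | grep '[s]sh.*-D 6666'

# Tear the tunnel down by killing that process:
pkill -f 'ssh.*-D 6666'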
h4. Make your browser use the proxy (but only for cluster machines)
You can access basic information by pointing your browser to "this Proxy
Auto-Configuration (PAC)
file.":http://github.com/infochimps/cluster_chef/raw/master/config/proxy.pac
You'll have issues if you browse around though, because many of the in-page
links will refer to addresses that only resolve within the cluster's private
namespace.
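If you'd like to see what the PAC file does before pointing your browser at it, it's a small JavaScript function you can just download and read (curl assumed here):

curl -L http://github.com/infochimps/cluster_chef/raw/master/config/proxy.pac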
h4. Set up FoxyProxy
To fix this, use "FoxyProxy":https://addons.mozilla.org/en-US/firefox/addon/2464 .
It allows you to manage multiple proxy configurations and to use the proxy for
DNS resolution (which cures the private-address problem).
Once you've installed the FoxyProxy extension and restarted Firefox,
* Set FoxyProxy to 'Use Proxies based on their pre-defined patterns and priorities'
* Create a new proxy, called 'EC2 Socks Proxy' or something
* Automatic proxy configuration URL: http://github.com/infochimps/cluster_chef/raw/master/config/proxy.pac
* Under 'General', check yes for 'Perform remote DNS lookups on host'
* Add the following URL patterns as 'whitelist' using 'Wildcards' (not regular expressions):
** *.compute-*.internal*
** *ec2.internal*
** *domu*.internal*
** *ec2*.amazonaws.com*
** *://10.*
* And add this one as blacklist:
** https://us-*st-1.ec2.amazonaws.com/*
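You can simulate what the browser will do from the command line: curl's @--socks5-hostname@ flag pushes DNS resolution to the far end of the tunnel, just as FoxyProxy's remote-DNS option does. The hostname below is a hypothetical private address; substitute one of your cluster's own:

curl --socks5-hostname localhost:6666 http://domU-12-31-39-00-64-11.compute-1.internal:9100/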
h3. Pulling logs to your local machine

Use s3cmd to sync the logs down for local spelunking, for example:

s3cmd sync s3://s3n.infinitemonkeys.info/emr/elastic_mapreduce_example/log/ /tmp/emr_log/
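The same trick works for whatever else the job left in the bucket. This sketch (using the placeholder bucket from the setup section) pulls down the output files and peeks at the first few lines:

s3cmd sync s3://emr.yourdomain.com/wukong/data/output/ /tmp/emr_output/
head /tmp/emr_output/part-*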