cd ~/dev/esWithChefSolo/
curl -# -L -k https://gist.github.com/gists/2821820/download | tar xz --strip 1 -C .
- Replace the data in run-1.json with your own.
1. Check and make sure that it's still valid JSON after your edits, e.g. with http://jsonformatter.curiousconcept.com/ (unless pasting your credentials into a web form gives you pause ... maybe the site validates client-side with JavaScript, who knows ;)
2. After the first time, I just kept a backup file that I could reuse rather than re-editing after every download:
cp run-1.json.backup run-1.json
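For orientation only, a chef-solo node file has this general shape; the keys below are a hypothetical sketch, and the real attribute names come from the gist's run-1.json:
{
  "run_list": [ "recipe[elasticsearch]" ],
  "elasticsearch": {
    "cloud": {
      "aws": { "access_key": "YOUR ACCESS KEY", "secret_key": "YOUR SECRET ACCESS KEY" }
    }
  }
}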
export HOST=XXX.XXX.XXX.XXX
export SSH_OPTIONS="-o User=ec2-user -o IdentityFile=~/.ec2/ec2.pem -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
scp $SSH_OPTIONS ./bootstrap.sh ./patches.sh ./run-*.json ./solo.rb $HOST:/tmp
time ssh -t $SSH_OPTIONS $HOST "sudo bash /tmp/bootstrap.sh"
time ssh -t $SSH_OPTIONS $HOST "sudo bash /tmp/patches.sh"
time ssh -t $SSH_OPTIONS $HOST "sudo chef-solo --node-name elasticsearch-test-1 -j /tmp/run-1.json"
1. Check server status with either command and wait until it's ready:
ssh -t $SSH_OPTIONS $HOST "curl localhost:9200"
ssh -t $SSH_OPTIONS $HOST "sudo service elasticsearch status -v"
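Rather than re-running curl by hand after each provisioning step, a small wait loop works too (a sketch, reusing the same $HOST and $SSH_OPTIONS):
until ssh $SSH_OPTIONS $HOST "curl -s localhost:9200" > /dev/null; do echo "waiting for elasticsearch..."; sleep 5; done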
time ssh -t $SSH_OPTIONS $HOST "sudo chef-solo --node-name elasticsearch-test-1 -j /tmp/run-2.json"
1. Check server status with either command and wait until it's ready:
ssh -t $SSH_OPTIONS $HOST "curl localhost:9200"
ssh -t $SSH_OPTIONS $HOST "sudo service elasticsearch status -v"
time ssh -t $SSH_OPTIONS $HOST "sudo chef-solo --node-name elasticsearch-test-1 -j /tmp/run-3.json"
1. Check server status with either command and wait until it's ready:
ssh -t $SSH_OPTIONS $HOST "curl localhost:9200"
ssh -t $SSH_OPTIONS $HOST "sudo service elasticsearch status -v"
time ssh -t $SSH_OPTIONS $HOST "sudo chef-solo --node-name elasticsearch-test-1 -j /tmp/run-4.json"
1. Check server status with either command and wait until it's ready:
ssh -t $SSH_OPTIONS $HOST "curl localhost:9200"
ssh -t $SSH_OPTIONS $HOST "sudo service elasticsearch status -v"
- You can test whether everything is working via the secure port using the head plugin: https://xxx.xxx.xxx.xxx:9443/_plugin/head/index.html (see the curl sketch after this list).
- If you want to play around with what's been placed on your server, it's worth knowing about these directories:
/var/chef-solo/cookbooks/
/usr/local/elasticsearch-0.19.3/plugins/
/usr/local/etc/elasticsearch/
- A good way to clean up and test any changes you make to the jetty plugin's Chef-related files:
sudo rm -f /usr/local/etc/elasticsearch/keystore \
           /usr/local/etc/elasticsearch/elasticsearch.yml.jetty \
           /usr/local/etc/elasticsearch/logging.yml \
           /usr/local/etc/elasticsearch/jetty.xml \
           /usr/local/etc/elasticsearch/jetty-ssl.xml && \
sudo rm -rf /usr/local/elasticsearch-0.19.3/plugins/jetty/ && \
sudo cp /usr/local/etc/elasticsearch/elasticsearch.yml.pre.jetty.backup /usr/local/etc/elasticsearch/elasticsearch.yml && \
ls -alrt /usr/local/etc/elasticsearch/ && \
ls -alrt /usr/local/elasticsearch-0.19.3/plugins/ && \
cd /var/chef-solo/cookbooks/elasticsearch && \
sudo git pull origin
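And the curl equivalent of the head-plugin check above (a sketch: -k skips certificate verification, which you will likely need if the certificate is self-signed; whether credentials are required depends on your run-*.json):
curl -k https://xxx.xxx.xxx.xxx:9443/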
This cookbook installs and configures the elasticsearch search engine and database.
It requires a working Java installation on the target node. The cookbook downloads the elasticsearch tarball from GitHub, unpacks it, and moves it to the directory you have specified in the node configuration (/usr/local by default).
It also installs a service which enables you to start, stop, restart, and check the status of elasticsearch.
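For example, on the node:
sudo service elasticsearch start
sudo service elasticsearch status --verbose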
If your node has the monit recipe available, it will also create a configuration file for Monit, which will check that elasticsearch is running, is reachable by HTTP, and that the cluster is in the "green" state.
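You can run the same check by hand; the cluster health endpoint reports the status directly:
curl http://localhost:9200/_cluster/health?pretty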
If you include the elasticsearch::plugin_aws recipe, the appropriate plugin will be installed for you, allowing you to use Amazon AWS features: node auto-discovery and S3/EBS persistence. You may set your AWS credentials either in an "elasticsearch/aws" data bag, or directly in the node configuration.
You may want to include the elasticsearch::proxy_nginx recipe, which will configure Nginx as a reverse proxy so you may access elasticsearch remotely with HTTP Authentication. (Be sure to include an nginx cookbook in your node setup as well.)
The cookbook also provides the elasticsearch::test recipe, which populates the test_chef_cookbook index with some sample data to check whether the installation, S3 persistence, etc. are working.
Include the elasticsearch recipe in the run_list of a node. Then, upload the cookbook to the Chef server:
knife cookbook upload elasticsearch
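For reference, the relevant part of the node definition might look like this (a sketch; your node will likely carry other recipes too):
{
  "run_list": [ "recipe[elasticsearch]" ]
}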
To enable the Amazon AWS related features, include the elasticsearch::plugin_aws recipe. You will need to configure the AWS credentials, bucket names, etc. You may do that in the node configuration (with knife node edit MYNODE or at the Chef console), but it is probably more convenient to store the information in an "elasticsearch" data bag:
mkdir -p ./data_bags/elasticsearch
echo '{
"id" : "aws",
"discovery" : { "type": "ec2" },
"gateway" : {
"type" : "s3",
"s3" : { "bucket": "YOUR BUCKET NAME" }
},
"cloud" : {
"aws" : { "access_key": "YOUR ACCESS KEY", "secret_key": "YOUR SECRET ACCESS KEY" },
"ec2" : { "security_group": "elasticsearch" }
}
}' >> ./data_bags/elasticsearch/aws.json
Do not forget to upload the data bag to the Chef server:
knife data bag from file elasticsearch aws.json
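If you want to double-check what landed on the server:
knife data bag show elasticsearch aws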
Usually, you will restrict access to elasticsearch. However, it's convenient to be able to connect to the elasticsearch cluster from curl or an HTTP client, or to use a management tool such as bigdesk.
To enable authorized access to elasticsearch, you may want to include the elasticsearch::proxy_nginx recipe, which will install, configure and run Nginx as a reverse proxy, allowing users with proper credentials to connect.
As with AWS, you may store the usernames and passwords in the node configuration, but also in a data bag item:
mkdir -p ./data_bags/elasticsearch
echo '{
"id" : "users",
"users" : [
{"username" : "USERNAME", "password" : "PASSWORD"},
{"username" : "USERNAME", "password" : "PASSWORD"}
]
}
' >> ./data_bags/elasticsearch/users.json
Again, do not forget to upload the data bag to the Chef server.
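That is, mirroring the earlier command:
knife data bag from file elasticsearch users.json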
After you have configured the node and uploaded all the information to the Chef server, run chef-client on the node(s):
knife ssh name:elasticsearch* 'sudo su - root -c "chef-client"'
The cookbook comes with a Vagrantfile, allowing you to test-drive the installation and configuration with Vagrant, the tool for building virtualized development infrastructure.
First, make sure you have both VirtualBox and Vagrant installed.
Then, clone this repository into elasticsearch, somewhere on your development machine:
git clone git://github.com/karmi/cookbook-elasticsearch.git elasticsearch
Switch to the cloned repository:
cd elasticsearch
Download the required cookbooks (unless you already have them in ~/cookbooks):
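The tar commands below extract into tmp/cookbooks, so make sure the directory exists first:
mkdir -p tmp/cookbooks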
curl -# -L -k http://s3.amazonaws.com/community-files.opscode.com/cookbook_versions/tarballs/1184/original/apt.tgz | tar xz -C tmp/cookbooks
curl -# -L -k http://s3.amazonaws.com/community-files.opscode.com/cookbook_versions/tarballs/1421/original/java.tgz | tar xz -C tmp/cookbooks
curl -# -L -k http://s3.amazonaws.com/community-files.opscode.com/cookbook_versions/tarballs/1098/original/vim.tgz | tar xz -C tmp/cookbooks
curl -# -L -k http://s3.amazonaws.com/community-files.opscode.com/cookbook_versions/tarballs/1413/original/nginx.tgz | tar xz -C tmp/cookbooks
curl -# -L -k http://s3.amazonaws.com/community-files.opscode.com/cookbook_versions/tarballs/915/original/monit.tgz | tar xz -C tmp/cookbooks
We will use the Ubuntu Lucid 64 box, but you may want to test-drive this cookbook on a different OS, of course. Check out the available boxes at http://vagrantbox.es.
Now, launch the virtual machine with Vagrant (it will download the box unless you already have it):
vagrant up
The machine will be started and automatically provisioned with chef-solo. You'll see Chef debug messages flying by in your terminal, installing and configuring Java, Nginx, elasticsearch, etc. The process should take less than 15 minutes.
After the process is done, you may connect to elasticsearch via the Nginx proxy:
open 'http://USERNAME:PASSWORD@33.33.33.10:8080/test_chef_cookbook/_search?q=*'
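(open is OS X-specific; on other systems, plain curl shows the same response:)
curl 'http://USERNAME:PASSWORD@33.33.33.10:8080/test_chef_cookbook/_search?q=*'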
Of course, you should connect to the box with SSH and check things out:
vagrant ssh
ps aux | grep elasticsearch
service elasticsearch status --verbose
curl http://localhost:9200/_cluster/health?pretty
- attributes/default.rb: version, paths, memory and naming settings for the node
- attributes/plugin_aws.rb: Amazon Web Services settings
- attributes/proxy_nginx.rb: Nginx settings
- templates/default/elasticsearch.init.erb: service init script
- templates/default/elasticsearch.yml.erb: main elasticsearch configuration file
- templates/default/elasticsearch-env.sh.erb: environment variables needed by the Java Virtual Machine and elasticsearch
- templates/default/elasticsearch_proxy_nginx.conf.erb: the reverse proxy configuration
- templates/default/elasticsearch.conf.erb: Monit configuration file
Author: Karel Minarik (karmi@karmi.cz)
MIT LICENSE