Set up and manage a local Nomad Cluster using Consul, Vault, Traefik and Vagrant.
$ vagrant up --provider=libvirt
$ vagrant ssh node-1 -c " sudo ifconfig"
# Consul
$ vagrant ssh node-1 -c " consul catalog nodes"
$ vagrant ssh node-1 -c " consul catalog services -node=node-1 -tags"
# Vault
$ curl http://192.168.0.111:8200/v1/sys/health | python -m json.tool # Check Vault 'unsealed'
$ vault kv put secret/aws/s3 aws_access_key_id=somekeyid # Create new secret
$ curl -X PUT -H " Content-Type: application/json" -d ' {"key":"****"}' \
" http://192.168.0.111:8200/v1/sys/unseal" # Unseal
# Nomad
$ NOMAD_ADDR=" http://192.168.122.111:4646" nomad server members -detailed
$ nomad node status
$ nomad job run <path/to/jobspec.nomad>
$ nomad job status <Job_ID>
$ nomad job stop <Job_ID>
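For a quick end-to-end smoke test, a minimal job specification can be written on the node and submitted with the commands above. This is only a sketch: the datacenter name "dc1" and the hashicorp/http-echo image are assumptions, adjust them to your cluster.
$ cat > hello.nomad <<'EOF'
# Minimal service job: one Docker task serving a fixed string
# (datacenter "dc1" and the image are assumptions, not taken from this repository)
job "hello" {
  datacenters = ["dc1"]

  group "web" {
    task "echo" {
      driver = "docker"
      config {
        image = "hashicorp/http-echo"
        args  = ["-text", "hello from nomad"]
      }
    }
  }
}
EOF
$ nomad job run hello.nomad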
$ docker exec -i -t ansible_controller bash
$ rsync -avz -e " ssh -i $HOME /.vagrant.d/insecure_private_key" \
example/ root@192.168.0.111:/root/example
When launched, the controller's entrypoint pulls all ansible-galaxy roles defined in the ansible/requirements.yml file.
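If ansible/requirements.yml changes after the controller is already running, the roles can be re-pulled manually; this mirrors what the entrypoint does (the exact entrypoint command is not reproduced here):
(ansible_controller)~$ ansible-galaxy install -r ansible/requirements.yml --force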
| # | Action | Command |
|---|--------|---------|
| 1 | Clone the repository | `host:~/ ❯ git clone <nomad-cluster>` |
| 2 | Edit the `Vagrantfile` and change the IP addresses to match your local network. Make sure they also match `ansible/inventory/group_vars/service_traefik/traefik.yml` (a quick check is sketched below the table). | |
| 3 | Spin up the Vagrant machines | `host:<nomad-cluster> ❯ vagrant up --provider=libvirt` |
| 4 | Build the container for the Ansible controller | `host:<nomad-cluster> ❯ docker-compose up --build` |
| 5 | Access the Ansible controller | `host:<nomad-cluster> ❯ docker exec -i -t ansible_controller bash` |
| 6 | Create a symlink to the ansible-vault script | `$ ln -s ./ansible/scripts/vault-pass-client.py vault-pass-client.py` |
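A quick way to verify that the addresses in the Vagrantfile and the Traefik group vars agree (the 192.168.0 prefix is only an example, use whatever subnet you chose):
host:<nomad-cluster> ❯ grep -n "192.168.0" Vagrantfile ansible/inventory/group_vars/service_traefik/traefik.yml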
| # | Action | Command |
|---|--------|---------|
| 1 | Change the IP addresses in the hosts file to match what you set in the `Vagrantfile` and your local network | `$ vagrant ssh node-1 -c "sudo ifconfig"` |
| 3 | Run the individual playbooks a first time (any password will do as the Vault password at this stage) to deploy Consul + Dnsmasq and Vault | `(ansible_controller)~$ ansible-playbook ansible/consul.yml` + `ansible/vault.yml` |
| 4 | Unseal Vault: head to http://192.168.0.111:8200/ and follow the process (remember to store the master key and the root token); a CLI alternative is sketched below the table | http://192.168.0.111:8200/ |
| 5 | Provide a Vault token to Nomad (remember to store the new password): in 'General Instructions' follow the 'Integrate Ansible-Vault' section and insert the encrypted token into `ansible/inventory/group_vars/service_nomad.yml` | |
| 6 | Run the playbook a second time to configure Nomad with the Vault token | `ansible-playbook --vault-id dev@vault-pass-client.py ansible/site.yml` |
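Step 4 can also be done from the CLI instead of the web UI; a minimal sketch, assuming the Vault address used above:
$ export VAULT_ADDR=http://192.168.0.111:8200
$ vault operator init    # only if Vault has not been initialised yet; prints the unseal keys and the initial root token, store them
$ vault operator unseal  # repeat, entering unseal keys until the threshold is reached
$ vault status           # should report "Sealed: false"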
3. New Applications Deployment
| # | Action | Command |
|---|--------|---------|
| 1 | Find the Nomad job description | `/home/vagrant/example/APPNAME.nomad` |
| 2 | Inside the Nomad server, run the levant deployment script | `(vagrant@node-1):~/example$ bash ./deploy-dev.sh` |
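deploy-dev.sh is not reproduced here; it wraps levant, whose direct invocation looks roughly like this (the var-file name is an assumption):
(vagrant@node-1):~/example$ levant deploy -var-file=dev.yaml APPNAME.nomad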
$ curl http://192.168.122.111/0/
$ curl -H Host:vault.service.consul http://192.168.122.111/ui/
| # | Action | Command |
|---|--------|---------|
| 1 | Stop the Vagrant machines | `host:<nomad-cluster> ❯ vagrant halt` (or `vagrant destroy`, see below) |
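The difference between the two: halt keeps the VMs and their disks for a later vagrant up, while destroy removes them entirely.
host:<nomad-cluster> ❯ vagrant halt
host:<nomad-cluster> ❯ vagrant destroy -f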
The variable `nomad_vault_token` needs to be created via Ansible-Vault.
In the Ansible controller, set up `gpg` and `pass`:
# 'pass' needs a pregenerated 'gpg' key
(ansible_controller)~$ gpg --list-key # will be empty
(ansible_controller)~$ gpg --full-generate-key # the full name you enter identifies the key below
# Create password-store and ansible-vault encryption password
(ansible_controller)~$ pass init "<gpg_full_key_name>"
(ansible_controller)~$ pass insert dev # 'dev' is the vault-id label
# Create inline secret to be used with ansible-vault
# Ensure keyname 'dev' matches in ansible.cfg and ansible/script/pass-keyring-client.py l.131 !!
# 130 if not keyname:
# 131 keyname = "dev" <-- HERE
$ ansible-vault encrypt_string --vault-id dev@vault-pass-client.py '[VAULT_ROOT_TOKEN]' --name 'nomad_vault_token'
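After pasting the encrypted value into ansible/inventory/group_vars/service_nomad.yml, a possible sanity check that it decrypts correctly (paths as used in the steps above):
(ansible_controller)~$ ansible localhost -m debug -a "var=nomad_vault_token" \
    -e @ansible/inventory/group_vars/service_nomad.yml \
    --vault-id dev@vault-pass-client.py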
(ansible_controller)~$ ansible-playbook --vault-id dev@ansible/scripts/vault-pass-client.py ansible/nomad.yml
# Or prompt interactively for the vault password instead of using the client script:
(ansible_controller)~$ ansible-playbook --vault-id dev@prompt ansible/nomad.yml
# Confirm Nomad
http://192.168.0.111:4646
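The same check can be done from the command line; /v1/status/leader is the Nomad HTTP API endpoint that returns the current leader's address:
$ curl http://192.168.0.111:4646/v1/status/leader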
Error while activating network: Call to virNetworkCreate failed: internal error: Network is already in use
# This is a libvirt/KVM network problem
# 1. Identify default interface
$ sudo virsh net-list
$ sudo virsh net-dumpxml vagrant-libvirt
$ sudo virsh net-edit vagrant-libvirt
# 2. Delete Interface and vagrant virtual network manually
# You can also check libvirt networks in VMM > Edit > Connection Details > Virtual Networks
$ sudo virsh net-destroy vagrant-libvirt
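If the network definition itself should also be removed, or a stale bridge is left behind (the bridge name below is an assumption, check `ip link` first):
$ sudo virsh net-undefine vagrant-libvirt
$ sudo ip link delete virbr1  # only if a leftover bridge is still present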
Olaf Marangone
Contact: olmighty99@gmail.com