Vagrant
By HashiCorp.
Workflow
Overview commands
box - manage "boxes"
destroy - stop the VM and delete all traces of it, including its stored disk image
gem - install Vagrant plugins via RubyGems (legacy, Vagrant 1.x)
halt - shut down the VM
init - prepare a directory with a new Vagrantfile
package - shut down the VM, then convert it into a 'package' which can be added as a box
provision - run just the provisioning (e.g. shell, Chef, Puppet...)
reload - reboot the VM, reapplying the Vagrantfile configuration (add --provision to rerun provisioners)
resume - un-suspend (i.e. unhibernate) the VM
ssh - open an SSH shell connection to the VM
ssh-config - print the SSH configuration used to connect to the VM
status - show the state of the VM
suspend - hibernate the VM
up - some or all of: copy a box image to create a new VM, apply configuration to it, provision and boot it
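A typical day-to-day workflow with these commands, as a minimal sketch (the box name hashicorp/bionic64 is only an example, substitute your own):

# create a Vagrantfile for the chosen box
vagrant init hashicorp/bionic64
# create, provision and boot the VM
vagrant up
# work inside the VM
vagrant ssh
# shut it down when done...
vagrant halt
# ...or delete it entirely
vagrant destroy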
Vagrantfile
A Ruby variable defined outside the provisioning block can be accessed inside the inline script via string interpolation:
"#{PATH_VAGRANT}"
An environment variable from the host can be accessed via Ruby's ENV. This is the way to pass information from the host system into the Vagrantfile. From there it may be passed into the provisioner like any other Ruby variable:
GIT_USERNAME = ENV['GIT_USERNAME']

# Use shell script to provision
config.vm.provision "shell", inline: <<-SHELL
  echo "#{GIT_USERNAME}"
SHELL
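A usage sketch for the host side (the value myuser is just a placeholder; on a Windows cmd shell use set GIT_USERNAME=myuser instead of the inline prefix):

# POSIX shell on the host
GIT_USERNAME=myuser vagrant up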
Getting the path of the folder containing the Vagrantfile, e.g. in order to copy files from a location relative to it into the VM.
PATH_VAGRANT = File.dirname(__FILE__)

# Use shell script to provision
config.vm.provision "shell", inline: <<-SHELL
  # echo the full path
  echo -e "Source: #{PATH_VAGRANT}\.aws\config"
SHELL
Copying files is done with a dedicated file provisioner:
PATH_VAGRANT = File.dirname(__FILE__)

# upload the AWS configs
config.vm.provision "file", source: "#{PATH_VAGRANT}\\.aws\\config", destination: "~/.aws/config"
config.vm.provision "file", source: "#{PATH_VAGRANT}\\.aws\\credentials", destination: "~/.aws/credentials"
config.vm.provision "file", source: "#{PATH_VAGRANT}\\.aws\\answerfile", destination: "~/.aws/answerfile"
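To check that the upload worked, a quick sketch from the host (assumes the default machine name):

# list the uploaded files inside the guest
vagrant ssh -c "ls -la ~/.aws"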
Escaping
Escapes, e.g. echoing quotes from within an inline script, must be doubled:
\\"
docker exec tomcat8 bash -c "echo -e '<role rolename=\\"manager-gui\\"/>' >> /usr/local/tomcat/conf/tomcat-users.xml"
Box
How to create a new box
# list vms
vboxmanage list vms

# make sure it's the correct one
vboxmanage showvminfo vagrant_default_1566982938109_10062

# create a new box
vagrant package --base vagrant_default_1566982938109_10062 --output devenv.v190819.box

# add box to the known boxes
vagrant box add devenv.v190819 devenv.v190819.box

# list boxes and see the new box added to vagrant
vagrant box list
Now you can reference the new box
# reference the new box
...
Vagrant.configure("2") do |config|
  config.vm.box = "devenv.v190819"
...
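To spin up a fresh machine from the new box, a sketch (the folder name is a placeholder):

mkdir newproject && cd newproject
vagrant init devenv.v190819
vagrant up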
Deleting VMs
vboxmanage list vms
"<inaccessible>" {5642f0cb-6042-486f-8ea6-7443cb5815b7} "Jenkins-Vagrant_default_1507401281630_21314" {8e3fd4ba-cda9-4539-9781-bbf7ad4fe423}
vboxmanage unregistervm Jenkins-Vagrant_default_1507401281630_21314 --delete
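If the machine is still known to Vagrant, the Vagrant-native way is preferable; a sketch (the box name is taken from the example above, only remove it if the box itself is no longer needed):

# from the Vagrant project folder
vagrant destroy
# optionally remove the underlying box as well
vagrant box remove devenv.v190819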
Commands
# clear the global-status cache, if a machine shows up which is not running
vagrant global-status --prune
Plugins
The complete list: https://github.com/hashicorp/vagrant/wiki/Available-Vagrant-Plugins
VBox additions
VirtualBox Guest Additions installation. Needed to enable folder mounting on any OS.
vagrant plugin install vagrant-vbguest
vagrant vbguest
Winnfsd - helps with NFS mounting problems on Windows.
vagrant plugin install vagrant-winnfsd
VBox SCP
To copy files from the host to the guest.
vagrant plugin install vagrant-scp
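A usage sketch (the file name is a placeholder; default is Vagrant's default machine name):

# copy a local file into the guest
vagrant scp ./somefile.txt default:/home/vagrant/somefile.txt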
Shell Provisioner
Echo multiple lines
mkdir -p /home/vagrant/.docker/

echo -e "
{
  \"proxies\": {
    \"default\": {
      \"httpProxy\": \"http://localhost:8500\"
    }
  }
}
" > /home/vagrant/.docker/config.json
Connect via PuTTY
These scripts use the output of vagrant ssh-config to connect to the Vagrant machine via PuTTY.
The normal way would be to install the plugin made exactly for that:
vagrant plugin install vagrant-multi-putty
vagrant putty
If you want to go the long way, you can write your own script to do it.
The PuTTY client supports the full color palette and therefore offers a better UX than CMD.
Put the following files into a folder on the PATH, e.g. next to vagrant.exe.
vagrantssh.ps1
A script to start the PuTTY client and connect to the Vagrant machine
$puttypath = "D:\Programme\Putty\putty.exe"
$vagrantProjectConfigs = & vagrant.exe ssh-config

# parse the vagrant configs
$port = ( $vagrantProjectConfigs | Select-String -Pattern "Port" -SimpleMatch ) -replace '^[ ]*Port[ ]+(.+)[ ]*', '$1'
$identityfile = ( $vagrantProjectConfigs | Select-String -Pattern "IdentityFile" -SimpleMatch ) -replace '^[ ]*IdentityFile[ ]+(.+)[ ]*', '$1'
$hostName = ( $vagrantProjectConfigs | Select-String -Pattern "HostName" -SimpleMatch ) -replace '^[ ]*HostName[ ]+(.+)[ ]*', '$1'

& $puttypath vagrant@$hostName -pw vagrant -P $port -i "$identityfile"
vagrantssh.bat
A helper to call the PowerShell script above.
PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& '%~dp0/vagrantssh.ps1'"
Useful plugins
Listed here https://github.com/hashicorp/vagrant/wiki/Available-Vagrant-Plugins
vagrant plugin install vagrant-multi-putty
vagrant plugin install vagrant-scp
vagrant plugin install vagrant-proxy
SSH Tunneling from Vagrant to host
You can create a tunnel from the guest to a remote machine and expose it on the host.
First, forward the port from guest to host. This makes guest port 4441 available on host port 4441:
# forwarding ports
config.vm.network "forwarded_port", guest: 4441, host: 4441 # rundeck
Then establish the tunnel: listen on guest port 4441 and forward to port 4440 on the remote machine i-012412412ASF2.
The important part here is NOT to omit the bind address: set it to 0.0.0.0, so that the SSH tunnel is exposed on all network interfaces, not only on localhost, and can therefore be reached from the host.
ssh ec2-user@i-012412412ASF2 -i ~/.ssh/ec2key.priv.openssh.ppk -fNTC -L 0.0.0.0:4441:localhost:4440
Now you can reach your Rundeck application on the host under: localhost:4441
which forwards to the guest VM port: localhost:4441
which forwards, via the SSH tunnel, to the remote server i-012412412ASF2: i-012412412ASF2:4440
In this example the AWS Session Manager is involved in creating the tunnel (connecting by instance id); this part is optional, see https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-enable-ssh-connections.html
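To verify the whole chain from the host, a sketch (assumes Rundeck answers HTTP on that port):

# should return Rundeck's HTTP headers
curl -I http://localhost:4441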
Setting up rsync folder sync on Windows
Why
When developing code in an IDE on the Windows host, I would like to be able to run the code in the guest VM, in an environment as close to the deployment as possible: e.g. on Kubernetes, integrated with other containers.
File-system watcher requirement: on the guest, changes to the code project's file system must be recognized and the code project rebuilt. So changes made in the IDE on the Windows host must trigger change events on the file system of the guest VM.
Problem: when sharing a code project via Oracle VirtualBox shared folders and changing files in the IDE on the Windows host, unfortunately NO change events were triggered for the file-system watcher on the guest. When sharing the code project via rsync, the requirement was fulfilled.
How
An example of setting up the environment
- On the host: install Cygwin, to get rsync on Windows
- On the host: add the "code-project-folder" to the same folder where the "Vagrantfile" is located
- Configure the Vagrantfile to sync the "code-project-folder"
- On the host: run the automatic sync command in one cmd shell
- SSH into Vagrant: run the continuous build in one shell
- Configure the Spring Boot application to hot-deploy changes
- SSH into Vagrant: run the application, e.g. a Spring Boot application
On the host. Install Cygwin, to get rsync on Windows
winget install -e --id Cygwin.Cygwin
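Note: for Vagrant's rsync synced-folder type, rsync.exe and ssh.exe must be reachable on the PATH. A sketch for the cmd shell, assuming the default Cygwin install location C:\cygwin64\bin (adjust if installed elsewhere):

set PATH=C:\cygwin64\bin;%PATH%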
On the host. Add the "code-project-folder" to the same folder where the "Vagrantfile" is located
cd "VagrantProjectFolder" mkdir "spring-boot-code-project-folder"
Configure Vagrantfile to sync "code-project-folder"
Sync folder `spring-boot-code-project-folder`. Based on `checksum`, not timestamp.
Vagrantfile
...
config.vm.synced_folder "./spring-boot-code-project-folder", "/mnt/spring-boot-code-project-folder/",
  type: "rsync",
  rsync__auto: true,
  rsync__args: ["--verbose", "--ignore-times", "--checksum", "--rsync-path='rsync'", "--archive", "--delete", "-z"],
  id: "spring"
...
On the host. Run the automatic sync command in one cmd shell
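With rsync__auto enabled in the Vagrantfile, the command to watch the folder and sync continuously is:

vagrant rsync-auto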
SSH into Vagrant. Run the continuous build in one shell
See https://docs.gradle.org/current/userguide/command_line_interface.html#sec:continuous_build
Skip the tests. Don't use the daemon.
vagrant ssh

cd "/mnt/spring-boot-code-project-folder"
./gradlew build -xtest --no-daemon --continuous
Configure the Spring Boot application to hot-deploy changes
The Spring Boot application needs to be instructed to pick up any rebuilt jar and hot-deploy it. Add a dependency on "devtools".
build.gradle
dependencies {
    // makes Spring Boot hot-swap the rebuilt jar into Tomcat
    developmentOnly 'org.springframework.boot:spring-boot-devtools'
}
SSH into Vagrant. Run the application, e.g. a Spring Boot application
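A sketch for this step, assuming the standard Spring Boot Gradle plugin and its bootRun task:

vagrant ssh

cd "/mnt/spring-boot-code-project-folder"
./gradlew bootRun --no-daemon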
Error: Authentication failure. Retrying...
That's also the reason for the disk /mnt/d not being mounted correctly.
# give the home permissions back to vagrant
# without that, /home/vagrant is owned by root,
# and so is /home/vagrant/.ssh/authorized_keys,
# meaning no SSH connection works and
# "vagrant ssh" fails with "Error: Authentication failure. Retrying..."
sudo chown -R vagrant:vagrant "/home/vagrant"