There are situations where you need to access services on the host machine from inside a Docker container. An example of this use case is testing PDF generation with pdf-bot running in a container against a website hosted in your IDE environment on the same host.
From Docker 18.03 onwards there is a convenient internal DNS entry (host.docker.internal) accessible from your containers that resolves to the internal network address of your host.
You can ping the host from within a container by running
ping host.docker.internal
To test this feature by following this guide, start an Ubuntu container:
docker run -t -d ubuntu
This will return the container ID; in my case it is a77209ae9b0f11c80ce488eda8631a03e8444af94167fd6a96df5ee2e600da1f
docker exec -it <container id> /bin/bash
e.g. docker exec -it a77 /bin/bash
Note: you do not need to use the full container ID; the first 3 characters are enough.
From within the container run the following commands:
Get package lists – apt-get update
Install net-tools – apt-get install net-tools
Install DNS utilities – apt-get install dnsutils
Install iputils-ping – apt-get install iputils-ping
There is a DNS service running on the container network's default gateway (eth0) that allows you to resolve the internal IP address used by the host. The DNS name that resolves to the host is host.docker.internal.
Ping the host to establish that you have connectivity. You will also be able to see the host IP Address that is resolved.
ping host.docker.internal
Note: you should use this internal DNS name instead of the IP, as the IP address of the host may change.
Services on the host should be listening on either 0.0.0.0 or localhost to be accessible.
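For example (my own illustration, assuming Python 3 is available on the host), you could expose a quick test service on port 4200 bound to all interfaces:
python3 -m http.server 4200 --bind 0.0.0.0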
e.g. to access a service on the host running on localhost:4200, you can run the following command from within the container.
curl 192.168.65.2:4200
Note that if you use host.docker.internal, some web servers will throw "Invalid Host header" errors, in which case you either need to disable the host header check on your web server or use the IP address instead of the host name.
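Because the host IP can change between restarts, a small workaround I use (a sketch of my own, not part of the original guide) is to resolve host.docker.internal inside the container at runtime and use the resulting address for requests that trip the host header check:
HOST_IP=$(getent hosts host.docker.internal | awk '{ print $1 }')
curl "$HOST_IP:4200"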
AWX is a web-based task engine built on top of Ansible. This guide will walk you through installing AWX on a fresh CentOS 7 machine. In this guide Docker is used without Docker Compose, and the bare-minimum options were selected to get the application up and running. Please refer to the official guide for more information or options.
Virtual Machine Specs
All commands are assumed to be run as root.
If you are not already logged in as root, switch to the root user before getting started:
sudo su -
Make sure your ‘/etc/resolv.conf’ file can resolve DNS. Example resolv.conf file:
nameserver 8.8.8.8
Run
yum update
Note: If you are still unable to run an update you may need to clear your local cache.
yum clean all && yum makecache
Install Git
yum install git
Change to the directory you want to clone AWX into
cd /usr/local
Clone the official git repository to the working directory
git clone https://github.com/ansible/awx.git
cd /usr/local/awx
Download and install Ansible
yum install ansible
Download yum-utils
sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
Set up the repository
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
Install the latest version of Docker CE
sudo yum install docker-ce docker-ce-cli containerd.io
Enable the EPEL repository
yum install epel-release
Install PIP
yum install python-pip
Using pip, install docker-py
pip install docker-py
Make should already be included in the OS; this can be verified using
make --version
If it has not been installed you can run
yum install make
Note: We will persist the PostgreSQL database to a custom directory.
Make the directory
mkdir /etc/awx
mkdir /etc/awx/db
Edit the inventory file
vi /usr/local/awx/installer/inventory
Find the entry that says "#postgres_data_dir" and replace it with
postgres_data_dir=/etc/awx/db
Save changes
Note: As of 12/03/2019, there is a bug when running with Docker. To overcome it, find the entry "#pg_sslmode=require" in the inventory and replace it with
pg_sslmode=disable
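If you would rather make both inventory changes non-interactively, something like the following should work, assuming the commented-out lines appear exactly as quoted above:
sed -i 's|^#postgres_data_dir=.*|postgres_data_dir=/etc/awx/db|' /usr/local/awx/installer/inventory
sed -i 's|^#pg_sslmode=require|pg_sslmode=disable|' /usr/local/awx/installer/inventory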
Start the docker service
systemctl start docker
Change to the right path
cd /usr/local/awx/installer/
Run the installer
ansible-playbook -i inventory install.yml
Note: You can track progress by running
docker logs -f awx_task
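Once the playbook finishes, you can also confirm the containers came up by listing them; on the AWX releases I have used this typically shows containers such as awx_web, awx_task, postgres, rabbitmq and memcached, though exact names may vary by version.
docker ps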
Check if firewalld is turned on; if it is not, it is recommended that you start it.
To check:
systemctl status firewalld
To start:
systemctl start firewalld
Open up port 80
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --reload
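To confirm the rule took effect, list the open ports:
firewall-cmd --list-ports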
You can now browse to your host IP and enjoy AWX at "http://<your host ip>"!
Note: Default username is "admin" and password is "password"
Elasticsearch is a fantastic tool for logging as it allows logs to be viewed as just another time-series piece of data. This is important for any organization's journey through the evolution of data.
This evolution can be outlined as the following:
Data that is not purposely collected for this journey will simply be bits wandering through the abyss of computing purgatory without a meaningful destiny! In this article we will be discussing using Docker to scale out your Logstash deployment.
If you have ever used Logstash (LS) to push logs to Elasticsearch (ES) here are a number of different challenges you may encounter:
When looking at solutions, the approach I take is:
Using Docker, a generic infrastructure can be deployed due to the abstraction of containers and underlying OS (Besides the difference between Windows and Linux hosts).
Docker solves the challenges inherent in the LS deployment:
For example, let's say you need to handle 1M logs per day, and have a requirement of 3 virtual machines so that you can tolerate the loss of at most 1 virtual machine.
Why not deploy 3 Logstash instances straight onto the OS, each sized at 4 CPU and 8 GB RAM?
Let’s take a look at how this architecture looks,
When a node goes down the resulting environment looks like:
An added bonus of this deployment is that if you want to ship logs from Logstash to Elasticsearch for central and real-time monitoring, it is as simple as adding Filebeat to the docker-compose.
What does the docker-compose look like?
version: '3.3'
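The compose file from the original deployment is not reproduced in full here, but a rough sketch under my own assumptions (the image tag, ports, pipeline path and resource limits are illustrative, not values from this article) could look like:
version: '3.3'
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:6.6.0 # assumed version tag
    ports:
      - "5044:5044" # e.g. Beats input
      - "5514:5514/udp" # e.g. syslog over UDP
    volumes:
      - ./pipeline:/usr/share/logstash/pipeline:ro # assumed pipeline location on the host
    deploy: # honoured when deployed with docker stack deploy
      replicas: 3 # three replicas, matching the 3-VM example
      resources:
        limits:
          cpus: "4"
          memory: 8G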
As with most good things, there is a caveat. With Docker you add another layer of complexity; however, I would argue that because the Docker images for Logstash are managed and maintained by Elastic, it reduces the implementation headaches.
Having said that, I found one big issue with routing UDP traffic within Docker.
This issue will cause you to lose a proportional number of logs after container re-deployments!!!
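One mitigation I have used for this kind of problem (my own workaround, not something covered in this article) is to flush stale UDP conntrack entries on the host after a re-deployment so that traffic stops being forwarded to the old container address; this assumes the conntrack tool is installed:
conntrack -D -p udp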
Disclaimer: This article only represents my personal opinion and should not be considered professional advice. A healthy dose of skepticism is recommended.
Let's quickly do a checklist of what we have so far
If you have not completed the steps above, review part 1 and part 2.
SSH into the virtual machine and swap to the root user.
Move to the root directory of the machine (running cd /)
Create two directories (this is done for simplicity):
mkdir /certs
mkdir /docker
Swap to the docker directory
cd /docker
We will create a docker compose file shortly (it is case and space sensitive, read more about docker compose), but first the certificates need to be put in place.
Unfortunately, nginx-proxy must read the SSL certificate as <domain name>.crt and the key as <domain name>.key. As such, we need to copy and rename the original certificates generated for our domain.
Run the following commands to copy the certificates to the relevant folders and rename:
cp /etc/letsencrypt/live/<your domain>/fullchain.pem /certs/<your domain>.crt
cp /etc/letsencrypt/live/<your domain>/privkey.pem /certs/<your domain>.key
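Before moving on, you can confirm both files landed in the right place:
ls -l /certs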
The docker compose file will dictate our stack.
Run the following command to create the file at /docker/docker-compose.yml
vi /docker/docker-compose.yml
Populate the file with the following content
Line by line:
version: "3.3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy #nginx proxy image
    ports:
      - "443:443" #binding the host port 443 to container 443 port
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /certs:/etc/nginx/certs #Mounting the SSL certificates to the image
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    environment:
      - VIRTUAL_HOST=<your domain ie. domain.com.au>
    networks:
      - webnet
networks:
  webnet:
Save the file by pressing Esc then typing :wq
Start docker
systemctl start docker
Pull the images
docker pull jwilder/nginx-proxy:latest
docker pull dockersamples/visualizer
Start the swarm
docker swarm init
Deploy the swarm
docker stack deploy -c /docker/docker-compose.yml test-stack
Congratulations! If you have done everything right you should now have an SSL-protected visualizer when you browse to https://<your domain>
To troubleshoot any problems check all services have a running container by running
docker service ls
Check the replicas count. If the nginx image is not running, check that the mounted /certs path exists.
If the nginx container is running, you can run
docker service logs --follow <service id>
then try accessing https://<your domain> and see whether the connection is coming through.
One of the greatest motivations for me is seeing the current open-source projects. It is amazing to be a part of a community that truly transcends race, age, gender and education, and that culminates in the development of society-changing technologies; it is not difficult to be optimistic about the future.
With that, let's deploy a containerized application behind an Nginx reverse proxy with free SSL encryption. This entire deployment will only cost you a domain.
The technologies used in this series are:
To start, I would advise signing up for an Azure trial. This will help you get started without any hassle.
If you have your own hosted VM or are doing a locally hosted docker stack please feel free to skip this part and move onto part 2.
Note: Technically you can use any image that can run docker.
You can leave the default settings as they are (I switched off auto-shutdown).
Note: Make sure public IP address has been enabled
Wait for the virtual machine to finish deploying…
After the machine has been successfully configured, browse to the virtual machine in Azure and get the public IP.
Log onto your domain provider (i.e. godaddy.com) and create an A record to point your domain at the public IP of the newly created VM.
Do a simple "nslookup <domain>" until you can confirm that the domain has been updated.
Browse to the virtual machine in Azure and open "Networking". The following ports need to be allowed for inbound traffic:
443 – This will be used to receive the SSL protected HTTPS requests
80 – This will be used temporarily to receive your SSL certificate
22 – This should be open already however if it isn’t, allow 22 traffic for SSH connections.
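If you prefer the Azure CLI to the portal for this step, a rough equivalent is the following (the resource group and VM names are placeholders you would substitute; the distinct priorities avoid rule conflicts):
az vm open-port --resource-group <resource group> --name <vm name> --port 443 --priority 900
az vm open-port --resource-group <resource group> --name <vm name> --port 80 --priority 901
az vm open-port --resource-group <resource group> --name <vm name> --port 22 --priority 902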
Using PuTTY if you are on Windows, or just the terminal on a Mac or Linux workstation, attempt to SSH into the machine.
After successfully logging in (using the credentials specified when creating the VM), enable the root user for ease of use for the purposes of this tutorial (do not do this for production environments).
This can be done by running
sudo passwd root
Specify the new root password
Confirm the root password
Congratulations, you have completed part 1 of this tutorial. Now that you have a virtual machine ready, let's move on to part 2.