CAuth2.0: An Open Security Framework for Smart Cryptographic Wallets

Introduction

Although Bitcoin, Ethereum and the general concept of cryptocurrency are much more widespread than when I got into the space, I am still surprised by how difficult it is for my friends and family to get started. Whether they are creating paper wallets via myetherwallet.com and saving their seed to a password manager (or, for the less tech-savvy, to a text file on their desktop) or trying to use a Ledger, the experience feels alien compared to pretty much every other software ecosystem.

CAuth2.0 is my proposal to solve this issue. The original goal was to create a user experience as close as possible to what users commonly encounter (email, password, 2FA) while providing the same level of security as hardware wallets. At a high level, the solution uses client-side encryption similar to secure password managers, combined with 2FA multi-signature workflows to ensure network security.

You can read the full framework here.

Key access: How does it work?

The framework proposes a client-encrypted key storage system that ensures the service provider never has access to private keys. As a result, users access their keys with an email, a password and a master password. Admittedly, the master password is an additional piece of information users must store; however, it has the added advantage of being disposable and customizable, which is not true of a seed phrase or private key.
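
To make this concrete, here is a minimal sketch of the client-side encryption idea, assuming a key derived from the master password with PBKDF2 and Fernet symmetric encryption from the Python cryptography package; the framework itself does not prescribe these exact primitives.

import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def encrypt_private_key(private_key: bytes, master_password: str) -> dict:
    # Derive an encryption key from the master password on the client.
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                     iterations=600_000)
    key = base64.urlsafe_b64encode(kdf.derive(master_password.encode()))
    # Only the ciphertext and salt are ever uploaded; the service provider
    # never sees the master password, the derived key or the private key.
    ciphertext = Fernet(key).encrypt(private_key)
    return {"salt": salt, "ciphertext": ciphertext}

Decryption mirrors this process on the client, using the stored salt and the user-supplied master password.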

Vault Creation and Key Storage Process

Key Retrieval Process

Two-factor Multi-signature: How does it work?

By using smart-account functionality and an intermediary security layer, users are able to authorize transactions via a second channel, e.g. email or SMS.

Transaction Broadcasting Communication Flow

Sample Deployment Architecture

Conclusion

By addressing these challenges, users of CAuth2.0 can have a familiar experience for primary authentication/authorization (OAuth2.0) and transaction broadcasting (two-factor confirmations). The proposed framework is compatible with all Smart Account enabled blockchains and can be implemented in a multi-service provider ecosystem.

Exporting data from Elasticsearch using Python

It is a common requirement to export data from Elasticsearch into a familiar format such as .csv; an example of this is exporting syslog data for audits. The easiest way I have found to complete this task is to use Python, as the language is accessible and the Elasticsearch packages are very well implemented.

In this post we will be adapting the full script found here.

1. Prerequisite

To be able to test this script, we will need:

  • A working Elasticsearch cluster
  • A workstation that can execute .py (Python) files
  • Sample data to export

Assuming that your Elasticsearch cluster is ready, let's seed some data in Kibana by running:

POST logs/_doc
{
  "host": "172.16.6.38",
  "@timestamp": "2020-04-10T01:03:46.184Z",
  "message": "this is a test log"
}

This will add a log to the "logs" index with fields similar to what is commonly ingested via Logstash using the syslog input plugin.

2. Using the script

2.1. Update configuration values

Now let's adapt the script by filling in our details on lines 7-13 (an illustrative sketch follows the list):

  • username: the username for your Elasticsearch cluster
  • password: the password for your Elasticsearch cluster
  • url: the URL or IP address of a node in the Elasticsearch cluster
  • port: the HTTP port for your Elasticsearch cluster (defaults to 9200)
  • scheme: the scheme to connect to your Elasticsearch with (defaults to https)
  • index: the index to read from
  • output: the file to output all your data to
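
For illustration only, the configuration block might look something like the following; the variable names and values here are assumptions, so match them to the script you downloaded.

username = "elastic"          # Elasticsearch username
password = "changeme"         # Elasticsearch password
url = "127.0.0.1"             # URL or IP address of a node
port = 9200                   # HTTP port (default 9200)
scheme = "https"              # connection scheme (default https)
index = "logs"                # index to read from
output = "output.csv"         # file to write the exported data to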

2.2. Customizing the Query

By default the script will match all documents in the index; however, if you would like to adapt the query, you can edit the query block.

Note: By default the script also sorts by the field "@timestamp" in descending order; you may want to change the sort to suit your data.
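
As a rough sketch (the exact structure depends on the script and your Elasticsearch client version), the query block might look like this:

query = {
    "query": {"match_all": {}},                  # match every document in the index
    "sort": [{"@timestamp": {"order": "desc"}}]  # newest documents first
}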

2.3. Customizing the Output

Here is the tricky Python part! You need to loop through your results and customize how you want to write the data out. As the .csv format uses commas (new column) and newline characters (\n) to structure the document, the default script includes some basic formatting (a sketch of the loop follows the list below):

1. The output is written to the file; each comma starts a new column, so the written line will contain the following for each hit returned:

column 1: result._source.host
column 2: result._source.@timestamp
column 3: result._source.message

2. When there is a failure to write to the file, the message is written to an array so it can be printed back later.

3. At the end of the script, all the failed messages are re-printed to the user.
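
A minimal sketch of such a loop is shown below; it assumes the search result comes back from the Python Elasticsearch client and that the field names match the sample document above, so adapt it to your own script.

failed = []
with open(output, "w") as out_file:
    for hit in result["hits"]["hits"]:
        source = hit["_source"]
        # comma = new column, \n = new row
        line = f'{source["host"]},{source["@timestamp"]},{source["message"]}\n'
        try:
            out_file.write(line)
        except Exception:
            failed.append(line)

# re-print anything that could not be written
for line in failed:
    print(line)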

2.4. Enjoying your hard work!

Looking at your directory you will now see an output.csv, and opened in Excel the contents will look like this:

How to access the Docker host from a Docker container

There are situations where, from within a Docker container, you need to access services on the host machine. An example of this use-case is testing PDF generation with pdf-bot running in a container against a website hosted in your IDE environment on the same host.

Summary

From Docker 18.03 onwards there is a convenient internal DNS entry (host.docker.internal), accessible from your containers, that resolves to the internal network address of your host from your Docker container's network.

You can ping the host from within a container by running

ping host.docker.internal

Proof

To test this feature using this guide you will need

  • Docker 18.03+ installed
  • Internet Access

Steps

Step 1. Run the Docker container using the command

docker run -t -d ubuntu

This will return the container ID; in my case it is a77209ae9b0f11c80ce488eda8631a03e8444af94167fd6a96df5ee2e600da1f.

Step 2. Access the container by running

docker exec -it <container id> /bin/bash

e.g. docker exec -it a77 /bin/bash.

Note: you do not need to use the full container ID; a unique prefix such as the first 3 characters is enough.

Step 3. Set up the container

From within the container run the following commands:

Get package lists – apt-get update

Install net-tools – apt-get install net-tools

Install DNS utilities – apt-get install dnsutils

Install iputils-ping – apt-get install iputils-ping

Step 4. Using the DNS Service

There is a DNS service reachable on the container network's default gateway that allows you to resolve the internal IP address used by the host. The DNS name that resolves to the host is host.docker.internal.

Step 5. Pinging the host

Ping the host to establish that you have connectivity. You will also be able to see the host IP Address that is resolved.

ping host.docker.internal

Note: you should use this internal DNS name instead of the IP, as the IP address of the host may change.

In this example, the host can be resolved to the IP 192.168.65.2.

Step 6. Accessing other services on the host

Services on the host need to be listening on either 0.0.0.0 or localhost to be accessible.

e.g. to access a service on the host running on localhost:4200, you can run the following command from within the container.

curl 192.168.65.2:4200

Note that if you use host.docker.internal, some web servers will throw "Invalid Host header" errors, in which case you either need to disable the host header check on your web server or use the IP address instead of the host name.
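
If you prefer to test this from code rather than curl, here is a minimal Python sketch; it assumes the requests package is available inside the container and a hypothetical dev server on port 4200.

import socket
import requests

# resolve the host's internal address from inside the container
host_ip = socket.gethostbyname("host.docker.internal")
print(f"host resolves to {host_ip}")

# call the (hypothetical) service on port 4200 via the resolved IP, which
# sidesteps "Invalid Host header" checks tied to the host name
response = requests.get(f"http://{host_ip}:4200")
print(response.status_code)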

References

  • Stack Overflow discussion of the solution here

  • Official Documentation here

Spoofing UDP Traffic with Logstash

Solving a 2 year old problem

Logs are usually sent via UDP and are most commonly available as syslog messages:

  • UDP: Best effort process-to-process based communication
  • TCP: Reliable host-to-host based communication

In many products (especially SIEMs – Security Information and Event Management), these logs have their sources identified by the source IP of the packet rather than the content of the message (often the content of the log does not contain source identification information, which is a result of poor logging design). As a result, in more complex topologies with log caching or staged log propagation, the source of the logs cannot be differentiated by the final system.

Figure 1: Example of simple caching

This may not matter in an environment where the cache only caches logs from a single device; however, if the cache is centralizing logs from multiple sources, it becomes impossible for the SIEM to differentiate the device sources of the logs, limiting the functionality available.

Figure 2: Example of more complex caching

This has been an open ticket since 04/05/2017.

The ELK Log Forwarding Topology

Elasticsearch is commonly used as a centralization point for logs due to its high ingestion capability as well as the extensive library of plugins that can be used for collecting, transforming and forwarding almost all types of data. This is especially important for service providers, who may have a responsibility both to store a copy of logs and to send a copy to customers for use in their own SIEM solutions.

Figure 3: Example of customer forwarding topology using the ELK stack

Logstash Output Plugin – Spoof

The default UDP Logstash output plugin does not allow for spoofing the source of the packet (IP address, port and MAC). I have written a new Logstash output plugin to enable this behavior. The plugin is able to spoof, on every individual message, the source IP, source port and source MAC address of the packet. To use the plugin you need to specify additional information about the target device (a packet-level sketch follows the list below):

  • Destination MAC Address (This cannot be resolved from the ARP table at this point in time)
  • Interface the traffic is intended to be spoofed from
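
For intuition only, the same packet-level idea can be sketched in Python with scapy; this is not how the plugin itself works (it uses the jnetpcap library, described below), and every address and interface name here is a placeholder.

from scapy.all import Ether, IP, UDP, Raw, sendp

# craft a syslog-style UDP packet with a spoofed source IP, port and MAC,
# then emit it on a specific interface (all values are placeholders)
frame = (
    Ether(src="aa:bb:cc:dd:ee:ff", dst="11:22:33:44:55:66")
    / IP(src="3.3.3.3", dst="10.0.0.10")
    / UDP(sport=2222, dport=514)
    / Raw(load=b"this should be the new message")
)
sendp(frame, iface="ens32")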

Getting Started

Pre-Requisites

The plugin uses the jnetpcap library and therefore requires a number of pre-requisites on the host to be completed:

  • Add jnetpcap library to OS
  • Install Plugin

JNetPCAP

It is possible to run the library on different operating systems; I have tested on Ubuntu 18.04. For instructions on running it on other operating systems, see the notes in the library's release notes.

After deploying a new Ubuntu Server with the default Logstash installation, complete the following steps:

  1. Download the jnetpcap library:

wget -O jnetpcap-1.4.r1425 https://downloads.sourceforge.net/project/jnetpcap/jnetpcap/Latest/jnetpcap-1.4.r1425-1.linux64.x86_64.tgz

  2. Unzip the files

tar -xvf jnetpcap-1.4.r1425

  3. Copy the library to the /lib folder

cp jnetpcap-1.4.r1425/libjnetpcap.so /lib/

  4. Install libpcap-dev (Ubuntu)

sudo apt-get install libpcap-dev

Note: On CentOS, the package is only available via the RHEL optional channel.

Note: If you are running Logstash as a service, the default permissions for the logstash user are not sufficient; run the service as root (if anyone knows the exact permissions needed to harden this, please DM me).

This can be done by editing /etc/systemd/system/logstash.service if you are using systemctl.

Installing the plugin

You can download the source code and build the code yourself. Alternatively you can download the gem directly from here.

  1. Move to the logstash folder

cd /usr/share/logstash

  2. Install the plugin

./bin/logstash-plugin install --no-verify <path-to-gem>/logstash-output-spoof-0.1.0.gem

Testing the plugin

  1. Create a test pipeline file

vi /usr/share/logstash/test.conf

  2. Copy the following pipeline into the file

Note: Remember to replace the placeholder values marked below.

input {
  generator { message => "Hello world!" count => 1 }
}
filter {
  mutate {
   add_field => {
      "extra_field" => "this is the test field"
      "src_host" => "3.3.3.3"
   }
   update => {"message" => "this should be the new message"}
  }
}
output {
  spoof {
    dest_host => "<REPLACE WITH YOUR DESTINATION IP>"
    dest_port => "<REPLACE WITH YOUR DESTINATION PORT>"
    src_host => "%{src_host}"
    src_port => "2222"
    dest_mac =>  "<REPLACE WITH YOUR DESTINATION MAC ADDRESS>"
    src_mac => "<REPLACE WITH YOUR MAC ADDRESS>"
    message => "%{message}"
    interface => "ens32"
  }
}

  3. On the destination device, you can run tcpdump to collect and observe the traffic

sudo tcpdump -A -i any src 3.3.3.3 -v

Note: You may need to install tcpdump

  4. On the server hosting Logstash, start the pipeline from the /usr/share/logstash path

./bin/logstash -f test.conf

Note: Be patient, Logstash is very slow to start up

On the target system where you are capturing traffic, you should see that the packets are coming from 3.3.3.3! Congratulations on spoofing your first message.

Figure 4: Sample of captured data

Conclusion

In this post I have demonstrated how you can use the new Logstash plugin to spoof traffic. Because the spoofed values can be driven by event data, the plugin can be used to support many exotic deployment topologies that remain SIEM compatible.

Using this plugin, hopefully you can support complex log forwarding topologies regardless of which technologies the end devices use.

Figure 5: Sample of Advanced Log Forwarding Topology

Friendly Guide to Deploying AWX (the upstream project of Ansible Tower)

Summary

AWX is a web-based task engine built on top of Ansible. This guide will walk you through installing AWX on a fresh CentOS 7 machine. In this guide Docker is used without Docker Compose, and the bare-minimum options were selected to get the application up and running. Please refer to the official guide for more information or options.

Prerequisites

Virtual Machine Specs

  • At least 4GB of memory
  • At least 2 CPU cores
  • At least 20GB of disk space
  • CentOS 7 image

Checklist

  1. Operating System
    • [ ] Update OS
    • [ ] Install Git
    • [ ] Clone AWX
    • [ ] Install Ansible
    • [ ] Install Docker
    • [ ] Install Docker-py
    • [ ] Install GNU Make
  2. Config File
    • [ ] Edit Postgres settings
  3. Build and Run
    • [ ] Start Docker
    • [ ] Run Installer
  4. Access AWX
    • [ ] Open up port 80
    • [ ] Enjoy

1. Operating System

All commands are assumed to be run as root.

If you are not already logged in as root, switch to root before getting started:

sudo su -

Update OS

  1. Make sure your /etc/resolv.conf file can resolve DNS. Example resolv.conf file:

    nameserver 8.8.8.8
    
  2. Run

    yum update

    Note: If you are still unable to run an update, you may need to clear your local cache.

    yum clean all && yum makecache

Install Git

  1. Install Git

    yum install git

Clone AWX

  1. Change to the directory where AWX will be cloned

    cd /usr/local

  2. Clone the official git repository to the working directory

    git clone https://github.com/ansible/awx.git

    cd /usr/local/awx

Install Ansible

  1. Download and install ansible

    yum install ansible

    Unofficial Reference Documentation

Install Docker

  1. Download yum-utils

    sudo yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2
    
  2. Set up the repository

    sudo yum-config-manager \
    	--add-repo \
    	https://download.docker.com/linux/centos/docker-ce.repo
    
  3. Install the latest version of Docker CE

    sudo yum install docker-ce docker-ce-cli containerd.io

    Official Reference Documentation

Install Docker-py

  1. Enable the EPEL repository

    yum install epel-release

  2. Install PIP

    yum install python-pip

    Unofficial Reference Documentation

  3. Using pip, install docker-py

    pip install docker-py

    Official Reference Documentation

Install GNU Make

  1. Make should already be included in the OS; this can be verified using

    make --version

    If it has not been installed you can run

    yum install make

2. Config File

Edit Postgres Settings

Note: We will persist the PostgresDB to a custom directory.

  1. Make the directory

    mkdir /etc/awx

    mkdir /etc/awx/db

  2. Edit the inventory file

    vi /usr/local/awx/installer/inventory

    Find the entry that says "#postgres_data_dir" and replace it with

    postgres_data_dir=/etc/awx/db

    Save changes

    Note: As of 12/03/2019, there is a bug when running with Docker; to overcome it, find the entry "#pg_sslmode=require" in the inventory and replace it with

    pg_sslmode=disable

3. Build and Run

Start docker

  1. Start the docker service

    systemctl start docker

Run installer

  1. Change to the right path

    cd /usr/local/awx/installer/

  2. Run the installer

    ansible-playbook -i inventory install.yml

    Note: You can track progress by running

    docker logs -f awx_task

4. Access AWX

Open up port 80

  1. Check if firewalld is turned on; if it is not, it is recommended that you start it.

    To check:

    systemctl status firewalld

    To start:

    systemctl start firewalld

  2. Open up port 80

    firewall-cmd --permanent --add-port=80/tcp

    firewall-cmd --reload

Enjoy

  1. You can now browse to your host IP ("http://<your host ip>") and enjoy AWX!

    Note: Default username is "admin" and password is "password"