Welcome

Welcome to my external memory and a record of what I'm currently working on. Here you will find snippets of code, tricks and solutions that I, or maybe you, may need to reference.

Feel free to comment your suggestions and any constructive criticisms that you may have.

Thursday, 23 October 2014

Debian Root Partition Encryption Using LUKS, dm-crypt and a Keyfile

The Problem

Debian has offered the option of encrypting your hard drive during installation for some time; however, it only allows you to unlock the drive using a password, which is good but not perfect. Unfortunately the installer also doesn't allow you to install to a pre-encrypted hard drive, removing the option of setting up an encrypted partition beforehand that is unlocked using a keyfile and installing into it.

The Solution

Since the Debian installer does not work with pre-encrypted drives we have to first install Debian on the drive before it's encrypted, with /boot on its own partition; I will explain why in the following paragraph. We'll then back up the installed root partition to another drive, encrypt the partition and restore its contents. Then we'll need to manually decrypt the drive, chroot into it, set up our boot process using initramfs to ensure the dm-crypt module is loaded and configure the system to decrypt the root partition during boot.
There is one thing to bear in mind when dealing with an encrypted root partition. In order to decrypt the root partition the boot partition must be unencrypted, otherwise you'll be caught in a catch-22: you'd need to decrypt the drive in order to access the tools you need to decrypt the drive. So when setting up the base system be sure to create a separate /boot partition as well.

Things You'll Need

  • Debian (A recent enough version which includes the dm-crypt module and the cryptsetup package. I have read conflicting reports of this being available on Debian 6 (Squeeze), but it is available on Debian 7 (Wheezy) and later, as this is the version I used to set this up for myself.)
  • A live CD/DVD/USB. I suggest a recent version of Ubuntu for its hardware compatibility and ease of use, however, as long as it has cryptsetup and the dm-crypt module any should work.
  • A second hard drive to backup to and restore the root partition from.
  • External storage (be it a USB key, SD Card, etc...) that will be used as the key to unlock the system.

Install the Base System

Installing the base system is outside the scope of this article except for one requirement: you must put /boot on its own partition. You will be fine with a /boot partition of ~100MB-200MB; however, if you want to be safe and have the extra space it doesn't hurt to make it 512MB.

Backing up the Root Partition

We'll use rsync to backup the root partition to another location so it can be restored to the encrypted partition later.
rsync -aAXv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found","/boot/*"} /* /path/to/backup/folder

Command Breakdown

Note: you should take a look at the rsync man page for a complete breakdown of this command, as I will only be giving a high-level overview of it.
  • -a Archive mode. This is a shortcut for the options -rlptgoD; in essence it recurses through all subdirectories and preserves things like ownership, permissions and symlinks, making it useful for backups.
  • -A Preserves ACLs (Access Control Lists).
  • -X Preserves extended attributes.
  • -v Verbose output.
  • --exclude={ ... } Folders to exclude during the backup.
  • /* Files we want backed up
  • /path/to/backup/folder Where we want to backup the root partition to. This MUST be on a different device.

Boot to Live CD/DVD/USB

Make sure your BIOS is set to boot from the specific device, insert it and go! Note that if you use UEFI it can cause issues booting to a Live CD/DVD as it may trump any settings in your BIOS if your BIOS is configured to use it.

Randomize Disk

This step is important since it can become apparent where your encrypted files are on your drive if you have any uniform data on it. A potential attacker can then look through your drive bit by bit and pick out where the contents of files begin and end. Knowing that the chunk of data they have is probably contiguous, they can then attempt to brute force the encryption. In essence this step adds background noise to your drive so it's not apparent where your files are located on the partition. There are two ways to do this, one more thorough than the other.

Using badblocks

This method works well and is considerably faster than the second method (using dd and urandom); however, it doesn't randomize every bit, which leaves the potential for this vulnerability to still be exploited.
sudo badblocks -c 10240 -s -w -t random -v /dev/sdb
*Where /dev/sdb is the block device you wish to encrypt

Command Breakdown

  • -c 10240 Specifies that badblocks should test 10240 blocks at a time. This amount is quite excessive, as the default is 64, but it should make the command run faster since it better fills the drive's write buffer.
  • -s Shows the progress of the process.
  • -w This option tells badblocks to write test patterns to each block and read them back to verify the block's integrity.
  • -t random Tell badblocks to write random test patterns.
  • -v Verbose mode
  • /dev/sdb The block device to test

Using dd and /dev/urandom

This is the more complete way to randomize the disk, as every bit gets semi-random data written to it, but it can take upwards of two or three days for large devices (1TB+). Note we are using /dev/urandom and not /dev/random as our input source, as /dev/random is very slow at gathering random bits and would take an incredibly long time to fill a drive.
dd if=/dev/urandom of=/dev/sdb iflag=nocache oflag=direct bs=4096

Command Breakdown

  • if=/dev/urandom The input file to pull data from.
  • of=/dev/sdb The output file to push the data coming from the input file to.
  • iflag=nocache Don't cache data coming from the input file.
  • oflag=direct Write directly to the output file, bypassing the cache.
  • bs=4096 The block size, in bytes, to write out.

Prepare Keyfile

dd if=/dev/urandom of=/media/myusbkey/keyfile.enc bs=512 count=1 iflag=fullblock

Command Breakdown

  • if=/dev/urandom The input file to pull data from.
  • of=/media/myusbkey/keyfile.enc The output file to push the data coming from the input file to. This will be the keyfile that has to be present in order for your system to boot.
  • bs=512 The block size, in bytes, to write out. The size of 512 is an arbitrary choice of mine and matches the maximum passphrase size for dm-crypt, which is 512 characters. A keyfile can, however, be up to 8192kB in size, and one might assume that the larger the keyfile the harder it is to gain access to the drive. It's important to note, though, that with LUKS the passphrase/keyfile is only used to unlock the master encryption key which actually decrypts the drive, so in this case having a larger keyfile does not necessarily make it harder to gain access to the drive's contents.

Empty the LUKS Header

Full disclosure: I'm not sure this step is necessary as I have not tested the process without it. If I were to guess, this is probably done during the encryption process and shouldn't need to be done manually.
If you haven't already, make sure you've created the partitions you're going to use on the drive. From here on out we'll be working on encrypting the partition itself.
Note: In the case of a LUKS partition the header is the first 592 bytes of the partition. However, due to possible issues with partition alignment, the encrypted data area of a LUKS partition doesn't start until after the second megabyte of the drive, and header backup processes back up the first 2MB of the drive. As such we will be zeroing out the first 2048 bytes of the partition.

Using Head and Redirection

head -c 2048 /dev/zero > /dev/sda2; sync

Command Breakdown

  • head This command prints the first part of a file (here, the specified number of bytes) to standard output.
  • -c 2048 This is the number of bytes to print out.
  • /dev/zero The file to print from.
  • > Redirects standard output to a file; here, the partition's block device.
  • /dev/sda2 This is the file we want to direct the output to. This is the block device that represents the partition we are going to encrypt.
  • ; Separates multiple commands and tells bash to wait until the preceding command returns before executing the next.
  • sync Forces the write buffer to flush, which makes sure the data has been written to the physical disk.

Using dd

dd if=/dev/zero of=/dev/sda2 bs=512 count=4
  • if=/dev/zero The input file is going to be /dev/zero
  • of=/dev/sda2 The output file is going to be /dev/sda2. This will be the partition we're encrypting.
  • bs=512 This is the block size we're writing out.
  • count=4 This is the number of blocks we're writing. In total we're writing 4 * 512 bytes (2048 bytes).

Formatting and Encrypting the Partition

cryptsetup luksFormat --verbose -c aes-xts-plain64 -s 512 /dev/sda2 /media/myusbkey/keyfile.enc

Command Breakdown

  • cryptsetup This is a convenience tool which makes encrypting and working with encrypted drives much easier.
  • luksFormat This is the type of format we're using for our encrypted partition.
  • --verbose Make the output verbose so we can see everything that's going on.
  • -c aes-xts-plain64 This sets what cipher to use when encrypting the partition. Note that the default cipher prior to version 1.6.0 was "aes-cbc-essiv", which is considered vulnerable to practical attacks and has since been changed. To be safe we explicitly set a secure cipher (Source: Arch Linux: Device Encryption).
  • -s 512 The key size, in bits, used to encrypt the drive. This number has to be a multiple of 8 and defaults to 256 if not specified.
  • /dev/sda2 The device to format
  • /media/myusbkey/keyfile.enc The keyfile we wish to use to unlock the device.

Mapping the Encrypted Partition

In order to mount the partition we have to use cryptsetup to open the drive and create a mapped device which we can treat as a regular block device to mount.
cryptsetup luksOpen --key-file /media/myusbkey/keyfile.enc /dev/sda2 cryptroot

Command Breakdown

  • luksOpen This option tells cryptsetup that we're opening an encrypted volume.
  • --key-file /media/myusbkey/keyfile.enc Tells cryptsetup where to find the keyfile.
  • /dev/sda2 The encrypted block device to decrypt.
  • cryptroot The name of the mapped device.

Formatting the Encrypted Partition

mkfs -t ext4 /dev/mapper/cryptroot

Command Breakdown

  • -t ext4 Specifies the file system type to format to. The default is ext2.
  • /dev/mapper/cryptroot The device to format. You'll notice we're accessing the device via its mapped name under /dev/mapper/cryptroot instead of /dev/sda2.

Mounting the Encrypted Partition/Filesystem

Create a folder that we will mount the filesystem on top of.
mkdir /media/encrypteddrive
Mount the filesystem via its mapped name:
mount -t ext4 /dev/mapper/cryptroot /media/encrypteddrive

Command Breakdown

  • -t ext4 The file system type of the device we're mounting.
  • /dev/mapper/cryptroot Again, we're interfacing with the device via its mapped name.
  • /media/encrypteddrive Where to mount it.

Get UUID of your USB Key

We will need this when setting up decryption at boot, as we'll need to specify the location of the keyfile. The most consistent way to do that is by UUID. It also has the added benefit of tying the keyfile to one specific device.
blkid
The above command will usually take a few seconds to run and will return the UUIDs of all of the storage devices connected to the machine. If you're unsure which one is your USB key, run the command once without the key plugged into the machine, then again after plugging it in, and look for the new device. If your USB key's partition has a label it will be displayed next to the UUID, making it easy to pick out.
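For example, a USB key's partition might show up something like this (the values here are made up; yours will differ):
/dev/sdc1: LABEL="MYUSBKEY" UUID="4E1A-D365" TYPE="vfat"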
Record the UUID of your USB key somewhere; we will use it in a few steps.

Restoring the Root Partition

rsync -aAXv /path/to/backup/folder/ /media/encrypteddrive

Command Breakdown

Note: you should take a look at the rsync man page for a complete breakdown of this command, as I will only be giving a high-level overview of it.
  • -a Archive mode: This is a shortcut for the options -rlptgoD; in essence it recurses through all subdirectories and preserves things like ownership, permissions and symlinks, making it useful for backups.
  • -A Preserves ACLs (Access Control Lists).
  • -X Preserves extended attributes.
  • -v Verbose output.
  • /path/to/backup/folder/ The location we backed up the root partition to (note the trailing slash, which tells rsync to copy the folder's contents rather than the folder itself).
  • /media/encrypteddrive Where we have the partition mounted.

chroot to Encrypted Root

This is a little tricky since we need to mount the current system's transient/device folders and our original boot partition to the encrypted system's root.
Navigate to the root of the encrypted drive.
cd /media/encrypteddrive
Mount the system's transient/device folders (replacing sda1 with the location of your boot partition that was created in the initial system setup):
mount -t ext4 /dev/sda1 boot/
mount -t proc proc proc/
mount -t sysfs sys sys/
mount -o bind /dev dev/
Change Root
chroot .

Add an Entry to the crypttab

Here you will define the encrypted device and how to access it. Add the following line to /etc/crypttab:
cryptroot /dev/sda2 /dev/disk/by-uuid/AC257504DA15b214:/keyfile.enc cipher=aes-xts-plain64,size=512,hash=ripemd160,keyscript=/lib/cryptsetup/scripts/passdev
Where AC257504DA15b214 is the UUID of your USB key. The rest of the parameters match those we used when creating the LUKS partition with cryptsetup, with two exceptions: cryptroot and hash=ripemd160. cryptroot is the mapping name we will use when referring to the drive in our fstab; hash=ripemd160 is the default hash used by cryptsetup but it has to be specified here explicitly.

Update fstab

We have to update fstab to point to the new mapped root. Update /etc/fstab with the following in place of its root entry:
/dev/mapper/cryptroot / ext4 defaults 0 1
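While you're in there, make sure the unencrypted /boot partition has an entry as well; assuming /dev/sda1 is your boot partition (as in the chroot step above), it would look something like this:
/dev/sda1 /boot ext4 defaults 0 2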

Update initramfs

Run the following command:
update-initramfs -u
This tool rebuilds your initramfs based on your current configuration, including setting up the auto-discovery of your encryption key and the decryption of your root partition during boot.

Reboot

If all went well you should now be able to reboot (with your Live CD/DVD/USB removed) and the system will boot into Debian as long as the specific USB key with the keyfile on it is plugged into the machine.

Sources

Wednesday, 9 April 2014

Python Boilerplate

What is it?

Basically, it allows you to make your code modular by letting you put your execution code into a main function which is only run when the file itself is executed by the Python interpreter. This means you can import this file from another module, without executing the main code, in order to make use of its methods.

The Code

The important part of the template is the final two lines. The call to main() will only be executed if this file was invoked directly by the Python interpreter, not when it is imported.
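For reference, a minimal version of the template looks something like this (the names are just placeholders):

def main():
    # Execution code goes here.
    print("Hello from main()")


if __name__ == "__main__":
    main()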

Thursday, 20 March 2014

Learning Unity 02: Movement

A Little Frustration

I took a break from this project out of misplaced frustration (and increased work load). What triggered this frustration was a misunderstanding I had about how Unity GameObjects were intended to be used.
I had originally planned to have an inheritance structure between my GameObjects that went GameObject > Mobile > UnitMobile. However, when I went to implement the Interact (movement) action in my Mobile base class I had to create that inheritance, and to my annoyance Unity has sealed the GameObject class, which means it cannot be inherited from. Bummer.
Coming back to the project several weeks later I decided to rework my design, which led me to, at first, implementing a MobileInterface interface which UnitMobile, extending MonoBehaviour, would implement. This had the benefit that MobileInterface could not be instantiated itself and would force UnitMobile to implement all the methods necessary for it to work. However, this could lead to code duplication as I implement the Mobile interface for other classes. Then, thanks to a remark from a colleague, the idea of using an abstract class for the behaviour came up. This satisfies all my requirements better than my original design. It allows me to implement base methods (such as Move) in the Mobile abstract class while requiring the extending class to implement all abstract methods (such as the InteractWith method) which must exist due to coupling between the UnitManager and UnitMobile classes. (I know coupling is usually a bad thing but direct method calls are far cheaper than a message system.)
So, in summary, my misunderstanding was that I didn’t know Unity’s intent with the GameObject. Unity intends everything that exists in a scene to be just a GameObject. This means no casting or type checking is needed when working with GameObjects. When you need to interact with the behaviours/components of a GameObject you can get their instances from the GameObject directly. Then these behaviors/components are actually the driving pieces of the system and the game objects themselves are just, in essence, decoration as well as a container for instance objects (scripts and components).

Overview Of Changes

  • I recreated the base scene so I could be sure that it’s easy to implement the ClickController and Managers.
  • I changed my naming prefix convention for my base empty GameObjects:
    • ~ Managers and Controllers
    • + GameObject, Cameras and Lights
  • Changed my approach to the Mobile/UnitMobile relationship. I now use an abstract class for Mobile.
  • Added handling for interactions (default right click) into the UnitManager and Mobile classes.
    • The Unit manager will issue an Interact command to all selected UnitMobiles
    • The Unit manager will issue an InteractedWith command to any UnitMobile that was clicked when issuing the command.
  • Implemented a simple Move method in the Mobile class which sets a target to move to.
  • Implemented the FixedUpdate() method, which moves toward the set target by looking at it and applying force to move it forward. It will only do this if a target is set and it is not within its set deadzone of that target.
  • I added the GNU licence to all source files. I chose this specific licence because I’d prefer that people not make money off of my work without my consent. The intent of this project is for me and others to learn and contribute.
  • Using a Mono profile I changed the coding standards to conform more to my personal tastes. I included this profile in case anyone else likes it, and so I don’t have to recreate it if I format my system later.

Specifics

The Abstract Mobile


Here are the two defaults for speed and deadRange. For those, like me, who hate seeing magic numbers inside code: I will be moving these to a constants solution once I find one that I like in Unity. Chances are this will be a manager class that extends MonoBehaviour so I can change the constants from Unity.

We want each mobile to handle an InteractWith action, as it needs to exist to satisfy the coupling between the UnitManager and UnitMobile.

A Move is issued from the UnitManager and simply sets a target for each selected UnitMobile. The actual code that moves the unit is in FixedUpdate.

This is pretty straightforward. If a target is set and we’re not too close to it (defined by deadRange), look at it and apply force forward.
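Putting those pieces together, a rough sketch of the abstract class looks something like the following (the values and member names are illustrative, not the exact source; the real code is in the tagged repository):

using UnityEngine;

public abstract class Mobile : MonoBehaviour
{
    // Defaults; these will eventually move into a constants solution.
    protected float speed = 10.0f;
    protected float deadRange = 1.0f;

    private Vector3? target = null;

    // Every mobile must handle being interacted with; this satisfies the
    // coupling between the UnitManager and UnitMobile.
    public abstract void InteractWith(GameObject other);

    // Issued from the UnitManager; just records where to go.
    public virtual void Move(Vector3 destination)
    {
        target = destination;
    }

    // If a target is set and we're outside the dead range, look at it and
    // push forward (assumes a Rigidbody is attached to the GameObject).
    protected virtual void FixedUpdate()
    {
        if (target.HasValue && Vector3.Distance(transform.position, target.Value) > deadRange)
        {
            transform.LookAt(target.Value);
            rigidbody.AddForce(transform.forward * speed);
        }
    }
}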

UnitMobile


The InteractWith method must be implemented even if it’s just a stub as the UnitManager assumes it exists in all UnitMobiles.

UnitManager


If we have an Interact action (Right Click) tell our selected units and the GameObject that was clicked on.

Calls the Move method of each selected UnitMobile. Note that in the future I see this interaction being abstracted as the UnitMobile will have multiple responses to an Interact action depending on context (such as attack, patrol, repair, etc.)

We first get the UnitMobile component of the GameObject. If it has a UnitMobile component then we call the InteractWith method on it; otherwise it’s not something we know how to deal with, so we don’t do anything with it.

Other Notes

I’ve been thinking about the abstraction layer (mentioned in the note regarding the InteractWithSelectedUnits method above). My current thoughts are to create a Handler/Message system that is coupled with specific unit types. Then a Unit would subscribe (depending on the type of unit) to any handler for an action it can perform. The UnitManager would then tell the handlers to pass along the message to a specific UnitMobile, and if that instance is subscribed to the handler the message will be passed. This removes the coupling between the UnitMobile and the UnitManager but does create a coupling between these handlers and their specific unit types. I’ll have to sit down and work through a few use cases, but in the meantime, if anyone who reads this has feedback about it, leave it below.

[Update] Next Steps: Simple Pathfinding

The next step is to implement a primitive pathfinding system. I have looked into several boxed versions of AI and Pathfinding implementations as well as the premium NavMesh feature of Unity.
I’m leaning toward implementing this myself for several reasons:
  • I’ll have full control to implement and modify this however I want.
  • It’ll be free.
  • It’ll be fun! And I’ll get to learn a thing or two about Game AI.
Over the next few days I will be researching game AI, both with Unity and in general, to get an idea of a good way to implement this. My current thought is to make either an intelligent world (i.e. terrain or plane) or an overlapping intelligent mesh that will define accessible paths.

Source

I’ve tagged the source at v0.2 and it can be obtained here: Bitbucket: Unity Strategy Game Development v0.2

Wednesday, 19 February 2014

Java Keystores: A Quick Reference

Overview

One of the biggest headaches I have to deal with is web services with self-signed SSL certificates. This is a common issue for applications in development because development and preview environments rarely have properly signed certificates. So in order to debug and develop you have to get around this.

Get the Certificate

Perhaps this is the lazy way of getting a certificate for a self signed web service but it does work and it's fairly easy. You simply use your browser (Chrome or Firefox).
  1. Navigate to your WSDL
  2. Click the lock icon in the navigation bar to the left of the URL
  3. If you're using Chrome (See the screen shot below for an example):
    1. Click the Connection Tab
    2. Click the Certificate Information
    3. Click Details
    4. Click Export
  4. If you're using Firefox:
    1. Click More Information
    2. Click View Certificate
    3. Click Details
    4. Click Export
  5. Select a location you can easily access and click Save
Exporting an SSL certificate in Chrome

Add the certificate to your Keystore

  1. Open a terminal and navigate to where you saved the certificate and execute the following command:
  2. keytool -import -trustcacerts -alias myAlias -file mycert.ca -keystore mykeystore.jks. There are several things to note here:
    • -trustcacerts will probably not help but it's nice to have. Basically, if the root certificate is already in your cacerts file located at jdk/jre/lib/security/cacerts you won't be pestered about accepting this certificate via a yes/no prompt.
    • -alias this is a unique name that identifies the certificate. Usually the domain name the certificate belongs to is a good choice here.
    • -file this is the certificate file you downloaded via your browser
    • -keystore (optional) this is the keystore you're going to store the certificate in. If it doesn't already exist it will be created. If this option is left out the user's default keystore will be used.
  3. You will be prompted for the keystore password. By default this is changeit if you left the -keystore option off. If you're creating a new keystore this will be its password.
  4. If asked about importing the certificate type "yes" and hit enter.

Using a non-default keystore in Java

This is done by including 3 JVM options when running your java application:
  • -Dsun.security.ssl.allowUnsafeRenegotiation=true
  • -Djavax.net.ssl.trustStore=/path/to/your/keystore
  • -Djavax.net.ssl.trustStorePassword=changeit
For example executing an application with a custom keystore would look like this:
java -Dsun.security.ssl.allowUnsafeRenegotiation=true -Djavax.net.ssl.trustStore=/home/bradley/liferay-preview-keystore -Djavax.net.ssl.trustStorePassword=changeit -jar myapp.jar

A Trick When Working with Multiple Environments

Since it's better to develop and debug with an environment that mimics your preview and production environments, I've taken to copying the Java keystore from our preview environments, which is managed by our middleware team and/or application admins, to my local machine and having my local environments/applications use it instead of importing certificates manually. This file is located at <JDK>/jre/lib/security/cacerts (in my case the full path is /usr/lib/jvm/java-7-oracle/jre/lib/security/cacerts). What this accomplishes is guaranteeing that my environment has the same certificates as our preview instance. Couple this with setting up an SSH tunnel to our preview instance for the application running locally, and development and migration between dev and preview environments for applications that consume a web service becomes trivial.

A Tool For Working with Web Services

A critical tool for debugging and setting up a web service/client is SoapUI. This application allows you to load a WSDL, either remotely or locally, and it will generate a SOAP envelope that you can use to query the web service. The results of the query are then returned and displayed in SoapUI.

A word of caution: I've had a lot of trouble getting SoapUI to use keystores I've set up via the GUI. If you're running into this problem you can use the instructions in the "Using a non-default keystore in Java" section and add the JVM options to SoapUI/bin/soapui.sh. Once I did this my SSL certificates were recognized (even though the exact same keystore was set up in SoapUI's preferences and my project) and everything worked perfectly.

Running an SSH tunnel using PUTTY

Why would you want to do this?

For many reasons. Mostly I do this when I need to appear to be a server in order to test a web service, network connection, or some other resource where access is restricted to a set of IP Addresses. This will let me use tools such as SoapUI and Firefox to debug/test things.

What is Putty

It's a cross-platform SSH client that saves sessions and makes using SSH just a little easier and more convenient. SSH can easily be done from a shell, but the convenience of having pre-defined sessions with things like tunnelling already configured makes Putty win out in my books.
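For reference, the equivalent dynamic tunnel from a plain shell is a one-liner along the lines of:
ssh -D 1080 user@yourserver.example.com
(where 1080 is the local SOCKS port and the host name is a placeholder for your own server).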

The basic usage of Putty itself is outside the scope of this article, but below are links to where it can be obtained and to its documentation:

What To Do

  1. Start up Putty
  2. Fill out your session details
  3. Go to Connection -> SSH -> Tunnels
  4. Set up your tunnel configuration (See image below)
    1. Enter your source port number (This is the port local clients will connect to)
    2. Select Dynamic
    3. Click Add
  5. Go back to Session and save your session
  6. Click Open
Putty Tunnel Configuration

Connecting To Your Proxy

Each program is different but here are my two most common methods/use cases:

Firefox

  1. Edit -> Preferences -> Advanced -> Settings
  2. Select Manual proxy configuration
    1. Set HTTP Proxy to 127.0.0.1 and Port to 1080 (or whatever port you defined)
    2. Ensure SOCKS v5 is selected
    3. Click OK

Java Applications

Run the Java application with the following Java options set:
-Dhttp.proxyHost=127.0.0.1 -Dhttp.proxyPort=8010
For example executing the application via command line may look like:
java -Dhttp.proxyHost=127.0.0.1 -Dhttp.proxyPort=8010 -jar myApplication.jar

Saturday, 1 February 2014

Setting up OpenVPN on a Virtual Private Server (VPS)

Overview

I recently set up my own VPS. I had installed OpenVPN on my own virtual machines and systems before but setting it up on a VPS created a few unexpected problems. Below is the process that I used to get things up and working.

Assumptions and Out of Scope Items

  • You have OpenVPN already installed at /etc/openvpn
  • You have Easy-RSA 2.0 located at /etc/openvpn/easy-rsa/
  • You have root access (I'm going to assume you're either logged in as root or running under su)
  • Your VPS has provisioned you a Static IP address.
  • OpenVPN client installation and configuration.

Special VPS Considerations

Note: A VPS typically operates with a shared kernel between all of its guests. Since a VPS operates at the hardware level (unlike a regular virtual machine, which is typically hosted by a piece of software like VirtualBox or VMWare) the hardware/kernel level is shared among all systems. You can read more on this on Wikipedia: Virtual Machine.

Because of this we run into two main issues:
  1. OpenVPN requires the TUN kernel module, which isn't usually present by default. And since you don't have access to the kernel, you can't load it yourself. To get around this you can usually ask your provider to enable it on your VPS, or it may be an option on your provider's VPS management page.
  2. In order to tunnel traffic you need to set up routing rules with iptables which, in my experience, is typically done using the MASQUERADE target, which isn't supported by my VPS host (OpenVZ). I would assume this is the case for most if not all VPS hosts.

The Process

Creating RSA Keys

If you can use encryption keys to access your VPN server why not? Unless you're making this service open to a large number of people the standard Username/Password approach just isn't as secure. And since, in this case, I can easily control and distribute my private key(s) I can't see a reason not to.
  1. Navigate to /etc/openvpn/easy-rsa/2.0/ and create a directory called keys
  2. Edit the /etc/openvpn/easy-rsa/2.0/vars file and set the following values. Note: These will be your default values when generating keys so you will get an opportunity to override them during the process.
    export KEY_COUNTRY="US"        # Your Country
    export KEY_PROVINCE="CA"       # Your State/Province/Territory 
    export KEY_CITY="SanFrancisco" # Your City
    export KEY_ORG="None"          # Your Organization Name
    export KEY_EMAIL="mail@domain" # Mail Address
    export KEY_EMAIL=mail@domain   # Mail Address
    Note: You can also add/alter the additional fields below, however, typically these values will not be the default when you generate your keys.
    export KEY_CN=OpenVPN.yourdomain.com # Common Name
    export KEY_NAME=yourname             # Your Name
    export KEY_OU=servername             # Organizational Unit
  3. Execute the following commands from the /etc/openvpn/easy-rsa/2.0/ directory. Note: You will be prompted for information when generating keys and this will create all your keys and certs in the /etc/openvpn/easy-rsa/2.0/keys folder:
    ./vars            # Sets up environment variables for key creation
    ./clean-all       # Cleans up any generated files in the folder
    ./build-ca        # Creates the certificate authority key
    ./build-key-server server # Creates the server key
    ./build-dh        # Creates the Diffie-Hellman key
    ./build-key clientname # Creates the client's private key
  4. Transfer the ca.crt, clientname.crt and clientname.key files to your client (physically if possible, encrypted if not) and place them in its openvpn folder.

Creating a TLS-Auth key (ta.key)

  1. Navigate to /etc/openvpn/
  2. run the command:
    openvpn --genkey --secret ta.key
  3. Done.

Configuring the Server (server.conf)

To make this a little simpler I've included a sample server.conf below. I'd recommend backing up your current server.conf file, as it has comments that will help you further configure your server to your needs.
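For reference, a minimal server.conf for the setup described in this post might look something like the following (the key/certificate paths assume the easy-rsa directory used above, and the DH file name depends on the key size set in your vars file):

port 1194
proto udp
dev tun
ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt
cert /etc/openvpn/easy-rsa/2.0/keys/server.crt
key /etc/openvpn/easy-rsa/2.0/keys/server.key
dh /etc/openvpn/easy-rsa/2.0/keys/dh2048.pem
tls-auth /etc/openvpn/ta.key 0
server 10.8.0.0 255.255.255.0
push "redirect-gateway def1 bypass-dhcp"
keepalive 10 120
comp-lzo
persist-key
persist-tun
user nobody
group nogroup
status /var/log/openvpn-status.log
verb 3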

Set up routing/forwarding

  1. We need to enable ip traffic forwarding:
    1. Enable forwarding by executing the following:
      echo 1 > /proc/sys/net/ipv4/ip_forward
    2. Make the change permanent by editing /etc/sysctl.conf and un-commenting the line:
       net.ipv4.ip_forward = 1
  2. Set up iptables rules to forward the traffic coming in from the tunnel (tun0) to our ethernet adapter (venet0). This is done by setting up forwarding between the client and server's network adapters (tun0 and venet0) as well as for traffic coming back to the client (venet0 to tun0). The final step maps the incoming local 10.8.0.0/24 addresses to the server's IP (this is the step that differs on a VPS).
    1. Enter the following into the command line:
      iptables -A FORWARD -i venet0 -o tun0 -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -A FORWARD -s 10.8.0.0/24 -o venet0 -j ACCEPT
    2. The following is the standard way of setting up the ip mapping which uses MASQUERADE but will not work on a VPS
      iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j MASQUERADE
      Because MASQUERADE isn't implemented in the VPS's kernel we have to complete the mapping manually. Keep in mind that as long as you have a static IP this method will work.
      iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j SNAT --to-source <server ip>
  3. Make the changes to iptables persist between reboots. I found some tools to help out with this but I tend to prefer solutions that don't require additional packages. And this one is simple, so why not!
    1. Store the current state of your iptables in an iptables rules file.
      iptables-save > /etc/iptables.openvpn.rules
    2. Add the following to the relevant interface stanza in /etc/network/interfaces.
      #Define OpenVPN IP address forwarding
      post-up /sbin/iptables-restore < /etc/iptables.openvpn.rules
      

Sources

Tuesday, 28 January 2014

Learning Unity 01.1: Source Code

Bitbucket

Why keep your source on Bitbucket and not GitHub, you might ask? Well, that answer is simple: GitHub is a little bit greedy and requires you to pay for private repositories even if you aren't a corporate user. Bitbucket does not, and has all the same functionality as GitHub, so I keep all my repositories there.

Where?

You can access the Git repository here:
https://bitbucket.org/bbertrim/unity-strategy-game-development/

The code still hasn't been completely cleaned up but it is in a working state. I plan on finishing up my clean up tomorrow and tagging it to match the "Learning Unity 01" post. I will be updating the post with a link to that tag and links to specific files within. I might throw a few screen shots in for flavor as well.

Sunday, 26 January 2014

Learning Unity 01: Unit Selection

Overview

I'm going to talk about the clicking feature I just implemented using Unity as the first step in creating my game. Also code will be made available on my bitbucket account as soon as I get it cleaned up and this project under version control.

[UPDATE]
The source code has been pushed and tagged. You can find the relevant code to this post at: My Bitbucket Repository.

I've also gone back through the post, linked to code, and added a screenshot of my Logging Manager.

The Design

The Common Approach

Generally the process for capturing clicks and selecting units is described in the following Sequence Diagram (Note: Yes it is not proper UML but you'll get the idea and I've also included some custom annotations):
Figure 1: Common Selection Implementation

Classes we need to code:

  • ClickController:
    • This class inherits from MonoBehaviour. This allows it to make use of the base class's Start, Update, Awake, OnEventMethods, etc. methods which tie us into the main process loop and events in the game. See its scripting reference.
    • Note that Unity does not allow us to instantiate any class extending MonoBehaviour in code. We have to attach them to GameObjects and Unity will handle their initialization. So to keep things simple I would suggest you create a single empty game object to hold all of your instanced MonoBehaviour classes (I call mine ~InstanceManager). This way you don't have scripts attached to random objects (such as your main camera) just because you have to in order to use it.
  • UnitManager: 
    • The UnitManager is created using a singleton design pattern so that we don't have to initialize it ourselves and that there is only ever one instance. (See Wikipedia: Singleton Pattern).
  • Unit
    • This is the MonoBehaviour that will be attached to all of our units. This allows us to add additional capabilities (via methods) and properties (via members) to the Unit such as movement, speed, hitpoints, etc. 

Following the sequence

  1. The user first clicks somewhere on the screen. The click controller is watching the input on the mouse button, and if it is ever clicked...
  2. It uses Unity's SendMessage capabilities to...
  3. Call a handler defined in the Unit class (LeftMouseButton())
  4. The LeftMouseButton method gets an instance of the UnitManager
  5. Calls the SelectUnit method on the instance which adds the unit to a list of selected units

Thoughts

Although the concept behind the process is sound I do find that this approach has a few flaws:
  1. It does not separate concerns well.
    • In order to keep a project organized, scalable, modular and maintainable, good separation of concerns is crucial.
    • When implementing future features such as multi-select, deselection, group move commands and patrol commands, the code will get messy and you'll have to hunt for where exactly a specific feature is implemented, since it could be the Unit handling it or the UnitManager.
  2. Performance
    • This method uses the unit itself to pass along the message that it was clicked, and is called using a message system instead of a direct call. The compiler may be smart enough to see the 1:1 mapping to another call in the Unit, but the use of a SendMessage method is worrisome. Since the method name is passed as a string, reflection has to be used, which is very expensive.
    • (Ties into the Separation of Concerns issue) Why does the Unit have to be involved with the selection process? All a unit should be doing is unit-esque stuff such as moving, shooting and dying. It's the responsibility of the unit manager to manage the units, so don't waste the CPU cycles calling the unit or create confusion about where the code lives.
  3. Potential for duplication of code
    • More than just units will be clicked on, and not just units will move, so why not have a base type to implement the shared features and just extend it.
  4. Coupling between the Unity Engine and User code.
    • To make code more readable and, even though it's moot in this case, more modular between platforms, an abstraction layer should be at least partially implemented which maps Unity and system methods/properties (such as buttons and keycodes) to actions. You should never see methods named LeftClick() or references to KeyCode outside of the mapping or interactions directly with Unity.
  5. Magic Numbers
    • Although it's not shown in the diagram, I found an excessive use of constants inside the code, especially with mouse buttons (0=left, 1=right, 2=middle).

My Approach

Magic Numbers and Mapping

I defined two enumerated types to deal with the magic number issues as well as for mapping purposes:
  • MouseButton
    • Values: Left, Right, Middle
  • Action - Maps actions to mouse buttons
    • Values: Select, Interact, Manipulate
Explanation: To begin with, the user is going to be able to do three things with units: select them, interact with them, and manipulate them. Now, using a mapping method, I can define which mouse click will execute which action.
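In code these are just two plain enumerations, roughly:

public enum MouseButton
{
    Left,
    Right,
    Middle
}

// User actions that get mapped to mouse buttons at runtime.
public enum Action
{
    Select,
    Interact,
    Manipulate
}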

You can see how I handle the mapping in my UnitManager.cs file. The main entry point is at the ClickNotification() method. Following the logic there you can see how the mapping is done with the enumerations which occurs in the MapMouseButtonToActionHandler() method.

Logging

Figure 2: Log Manager Script in the Inspector
Coming from a Java background I found the lack of logging flexibility within Unity a little frustrating, specifically adjusting log levels for specific classes. So I wrapped the Unity logger inside my own logging class and am adding this functionality to it.

Figure 2, to the right, shows my Log Manager script attached to my ~Managers object. This allows me to adjust the default log level at run time as well as turn logging on and off for specific classes.

Base Mobile class

Any object that acts like a mobile (can move, die, etc...) will be derived from this class. Currently this has no impact but it will be utilized when I add unit movement.

New Sequence Diagram

Figure 3: My Select Implementation
You'll notice that the work is confined to the UnitManager; the ClickController just notifies the UnitManager that a click occurred and passes along all pertinent information, such as the GameObject clicked, the location in global space, and any key modifiers pressed. Then it's the UnitManager's responsibility to do what it needs to do. This time it checks its mapping for the mouse button pressed (which maps to Select) and calls the SelectActionHandler, which in turn decides that we're selecting a unit and calls SelectUnit.

At this point I have multi-select, individual selection and deselection and single selection working. But for all of these you'll note that the additional functionality doesn't change the sequence of events, only the actions called at the end. For instance if you are holding Ctrl and click a selected unit the SelectionActionHandler would call DeselectUnit instead of SelectUnit.

Next Steps

Get my code under version control and share it! Then I plan to implement movement and box selections.

Saturday, 25 January 2014

Learning Unity 00: Introduction

What is Unity?

Unity is a game engine and development platform. It comes bundled with Mono Develop as a programming IDE. It has a pretty small learning curve to the point where even those with little to no programming experience can build something with it.

Why learn it?

Well, I've had the desire to create a game for 10 or more years now. It all started with Ultima Online and a RunUO-powered shard. It was my first taste of crafting a world with programming and I never forgot it. RunUO is a reverse-engineered server implementation for Ultima Online which is written in C# and is very easy to modify and extend. Topping that off, I have a background in art and I like to write/read fantasy/sci-fi. But most importantly: I'm a programmer with 6 years of training (two diplomas, one in Computer Engineering & Software Development and the other for Computer Analyst) and 4 years of professional experience.

Why Unity?

Mainly because it's free and all the hard work is done for you, leaving the fun stuff for me to do. And since the engine itself is maintained by a third party I don't have to support it. I've also looked into other possibilities:

  • OGRE: I used it for a week until I decided it wasn't worth the effort. OGRE is a pretty good rendering engine but you have to create the rest of the game engine yourself and frankly I don't have that much free time.
  • Unreal Engine: Was my second look and I was intrigued but then this project faded from view for a while and I never really gave it too much of a try.
  • Unity: Where I work, Queen's University, they offer a Game Design course in Computer Science. We had a student do a placement in my office and he told me about Unity, and I was intrigued. I got back into the game design mood and did a lot of research on Unity; there is a lot of documentation to be found and it's very easy to get into. My real enjoyment of it came from its scripting, which is done in C#, JavaScript or Boo (of course, being a Java developer and having a lot of experience with C#, I chose C#). The scripting in Unity is great: there are absolutely no limits on what you can do, and a lot can be done in a very short amount of time.
  • Unreal Engine: This time I took a more serious look at the Unreal Development Kit (UDK), and based on videos and content coming from Epic it's the next Holy Grail. To be honest, the videos Epic put out showing off its features are mouth watering. However, my first twinge of doubt was when they showed off their Kismet visual scripting system. Don't get me wrong, it's pretty damn neat; however, I keep thinking: why? When you're writing code you visualize this data in a similar way in your head and can write it 10 times faster than pointing and clicking in a GUI and then connecting inputs and outputs with a bezier. In the end I came to the conclusion that UDK may be able to generate a prettier game, but Unity is just going to be more fun to develop in and I feel a lot more flexible with what I can create.
  • Cryengine: All I can say is, “Beautiful”. This engine is absolutely stunning. However it's meant for a 1st/3rd person game and in my initial dive into the game development world I am intending to make a RTS/TBS game. From the examples and videos of RTS games created using Cryengine it just seems to be lacking. Sure the visuals still look nice but it's still, in essence, a first or third person game looking straight down.

Other Plans

I had played around with the idea of doing a video series on YouTube of the development process; however, I'm not sure who would bother watching it, and as such it wouldn't be worth the effort. However, if people do end up following this blog and they seem interested I may change my mind.

What's Next?

My first goal is to come up with basic unit selection, management and movement. My next post will detail my design and go over my implementation and my reasons why I did what I did.

Hibernate configuration and setup in a web application

Overview

The steps are: create a Hibernate configuration file, create a SessionFactory with it and get a Session.
This is usually done during the loading of the web application context so that database verification/update/creation will be done upon the application's deployment instead of on first use.

Hibernate.cfg.xml

This file defines your data source, entities and configuration. This file “should” be placed in your application's root build path for ease of setup. However, if you need to, it can be placed anywhere within your application's build path. I would recommend you keep it in the root of your build path unless you have a reason not to, as you will need to reference the path when creating your SessionFactory. Below are a couple of examples of a Hibernate.cfg.xml file.
For connecting directly to the database:
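Something along these lines, where the driver, URL, credentials and entity class are placeholders for your own:

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
        "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
        "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <session-factory>
        <!-- Direct JDBC connection details -->
        <property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
        <property name="hibernate.connection.url">jdbc:mysql://localhost:3306/mydatabase</property>
        <property name="hibernate.connection.username">myuser</property>
        <property name="hibernate.connection.password">mypassword</property>
        <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
        <!-- Verify/update the schema when the SessionFactory is built -->
        <property name="hibernate.hbm2ddl.auto">update</property>
        <!-- Bind the session to the current thread (see the note below) -->
        <property name="hibernate.current_session_context_class">thread</property>
        <!-- Annotated entity classes -->
        <mapping class="com.example.myapp.model.MyEntity"/>
    </session-factory>
</hibernate-configuration>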
For connecting to a data pool managed by the application server:
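This is the same file as above, except the connection properties are replaced with a single reference to the JNDI datasource defined by the application server (the JNDI name here is a placeholder and must match the resource name your server exposes):

<property name="hibernate.connection.datasource">java:comp/env/jdbc/MyDataSource</property>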
Note: You'll notice that we're using “thread” as our session context class. This is the recommended way of using Hibernate within web applications. This means that we are going to have our session attach to the current running thread instead of managing it ourselves within the servlet life cycle. The implication is that when we use sessionFactory.getCurrentSession() we are getting the session attached to the thread, which will automatically close when we commit or roll back a transaction. Since a standard unit of work maps well to the lifecycle of a servlet this works very well. We can call the sessionFactory via our singleton in HibernateUtil (see the next section for more information) at any time to get the current session to do work without the need of passing it around. You will notice that I do pass the session object around in my examples and I don't really need to. However, I started off doing it that way and I'd prefer to just stay consistent (and not have to update my prefab classes). Perhaps next time I have a big project I will redo my prefab method signatures and remove it.

HibernateUtil

The standard practice for interacting with a Hibernate Session is to have a utility class keep the SessionFactory as a singleton.
In my case I chose to have the singleton initialized within a static code block so that it will be instantiated when the class is loaded (which should be when you deploy the application). However, since this may differ from application server to application server, I will also create a listener that will be called when the application context is registered. I know the listener makes the static code block initialization for the singleton moot, but I like the non-standard singleton design pattern.
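A stripped-down sketch of what such a class can look like (exception handling and logging kept to a minimum; the joinTransaction() helper discussed below is included at the end):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateUtil {

    private static final SessionFactory sessionFactory;

    // Build the factory when the class is loaded (normally at deployment).
    static {
        try {
            sessionFactory = new Configuration().configure().buildSessionFactory();
        } catch (Throwable ex) {
            throw new ExceptionInInitializerError(ex);
        }
    }

    public static SessionFactory getSessionFactory() {
        return sessionFactory;
    }

    // Begin a transaction only if one isn't already active on the
    // thread-bound session; returns true if this call started it.
    public static boolean joinTransaction(Session session) {
        if (!session.getTransaction().isActive()) {
            session.beginTransaction();
            return true;
        }
        return false;
    }
}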
You'll also note the joinTransaction() helper method at the end. I find I don't often use a framework that does transaction management, since I don't often write large applications and prefer a lighter, or no, framework. This method allows me to chain method calls together to avoid duplicate code without the need to worry about whether there is an existing transaction. Below is an example of how I use this method:
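Something like the following, where the entity and method names are made up; only the method that actually started the transaction commits it:

public static void saveCustomer(Session session, Customer customer) {
    boolean owner = HibernateUtil.joinTransaction(session);
    session.saveOrUpdate(customer);
    if (owner) {
        session.getTransaction().commit();
    }
}

public static void saveOrder(Session session, Order order) {
    boolean owner = HibernateUtil.joinTransaction(session);
    // Reuse another utility method; it joins the same transaction and
    // won't commit because it didn't start it.
    saveCustomer(session, order.getCustomer());
    session.saveOrUpdate(order);
    if (owner) {
        session.getTransaction().commit();
    }
}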
You'll notice that each utility method can be called on its own and complete its task, but methods that require another utility method can also use them, and they won't execute a commit if they weren't the method that was first called. This can be taken a step further if you have a more complex business layer which could manage the transaction itself.

HibernateListener

Next we create our HibernateListener class. This class is responsible for getting the session factory when the Context is initialized and closing it when the context is destroyed.
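A minimal sketch of the listener (it also has to be registered in web.xml with a <listener> element whose <listener-class> points at this class):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class HibernateListener implements ServletContextListener {

    public void contextInitialized(ServletContextEvent event) {
        // Touching the factory builds it (and checks the schema) at deployment.
        HibernateUtil.getSessionFactory();
    }

    public void contextDestroyed(ServletContextEvent event) {
        HibernateUtil.getSessionFactory().close();
    }
}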

Sources

https://community.jboss.org/wiki/Sessionsandtransactions

Friday, 24 January 2014

Tomcat connection pools and Hibernate

Overview

Tomcat accomplishes connection pooling by defining a resource for a given context. That resource is then referenced within a web application's web.xml, which causes the Tomcat resource factory to create the requested resource and make it accessible to that web application.

Defining Resources

Resources can be defined in either of the following locations:

Global Context

tomcat/conf/context.xml
All resources defined here can be referenced in any web application.

Context Specific

tomcat/conf/[enginename]/[hostname]/[appcontext].xml
[enginename] This will be Catalina
[hostname] This will usually be localhost
[appcontext] This will be the directory name the application is deployed to under the tomcat/webapp directory.

Following the above directory structure and naming convention allows you to define a context and resource that can only be accessed by a specific application. An example of this would be tomcat/conf/Catalina/localhost/MyExampleApp.xml

Below is an example of context.xml with a resource defined within. You can also take this example, remove the WatchedResource element, and use it for a context-specific resource.
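Something along these lines, where the resource name, driver, URL and credentials are placeholders for your own:

<?xml version="1.0" encoding="UTF-8"?>
<Context>
    <WatchedResource>WEB-INF/web.xml</WatchedResource>
    <!-- Pooled datasource; adjust the driver, URL and credentials to suit -->
    <Resource name="jdbc/MyDataSource"
              auth="Container"
              type="javax.sql.DataSource"
              driverClassName="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/mydatabase"
              username="myuser"
              password="mypassword"
              maxActive="20"
              maxIdle="10"
              maxWait="10000"/>
</Context>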


Web Application Setup

To allow access to the resource you must make a reference to it in your application's web.xml:
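For example (the res-ref-name has to match the name of the Resource defined above):

<resource-ref>
    <description>Pooled datasource for my application</description>
    <res-ref-name>jdbc/MyDataSource</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
</resource-ref>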

Hibernate Setup

To configure Hibernate to use the resource, reference it by its full JNDI name in the <property name="hibernate.connection.datasource"> tag:
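For example, using the resource name from above:

<property name="hibernate.connection.datasource">java:comp/env/jdbc/MyDataSource</property>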

Sources:

http://wiki.apache.org/tomcat/TomcatHibernate
http://tomcat.apache.org/tomcat-7.0-doc/jndi-resources-howto.html
http://tomcat.apache.org/tomcat-7.0-doc/jndi-datasource-examples-howto.html