1 change: 1 addition & 0 deletions .gitignore
@@ -0,0 +1 @@
.terraform
84 changes: 46 additions & 38 deletions README.md
@@ -1,58 +1,66 @@
# Wave Operations Engineering Development Challenge
# Overview

Applicants for the [Operations Engineering team](https://wave.bamboohr.co.uk/jobs/) at Wave must complete the following challenge, and submit a solution prior to the interviewing process.
This repo has two parts:

The purpose of this exercise is to create something that we can discuss during the on-site interview, and that's representative of the kind of things we do here on a daily basis.
1. The `packer` dir: everything needed to build an AWS EC2 image with the app baked in. The product of this dir is an AMI.

There isn't a hard deadline for this exercise; take as long as you need to complete it. However, in terms of total time spent actively working on the challenge, we ask that you not spend more than a few hours, as we value your time and expect to leave things open to discussion in the on-site interview.
2. The `terraform` dir: the infrastructure that runs that app. At this time it's a single EC2 instance in a minimal VPC.

Send your completed submission to your contact at Wave. Feel free to email [[email protected]]([email protected]) if you have any questions.
I tried to keep this as simple as I could while staying functional. I also thought about security, usability, and scalability.

## Submission Instructions
`aws.sh` populates your environment with your AWS credentials (read from pass).
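
Because it only exports environment variables, source it rather than run it, e.g.:

```
. ./aws.sh
```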

1. Fork this project on GitHub - you'll need to create an account if you don't already have one
1. Complete the project as described below within your fork
1. Push all of your changes to your fork on GitHub and submit a pull request
1. Email your contact at Wave to let them know you have submitted a solution, and make sure to include your GitHub username in your email (so we can match applicants with pull requests)
# Packer

## Alternate Submission Instructions (if you don't want to publicize completing the challenge)
Three files make up the Packer build (a sample build invocation follows the file list):

1. Clone the repository
1. Complete your project as described below within your local repository
1. Email a patch file to your contact at Wave
* `image.json`: the Packer template; it references the two files below.

## Project Description
* `provision.sh`: a script run at AMI build time that installs the app and its supporting software.

There's a basic Python app available [here](https://github.com/wvchallenges/opseng-challenge-app). Your task is to host this app on AWS, using the current `HEAD` of the `master` branch as of when we test your submission.
* `rc.local`: a startup script that makes sure the service is running when the instance comes up; the gunicorn startup options live here.
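
For reference, building the image by hand looks roughly like this (`aws-app.sh` automates the same build):

```
. ./aws.sh                  # credentials for the amazon-ebs builder
cd packer
packer validate image.json  # optional sanity check
packer build image.json     # produces an AMI named "opseng-challenge-app <timestamp>"
```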

The OS used for hosting, and the tools & techniques used to accomplish this are up to you. Once you're done, please submit a paragraph or two in your `README` about what you're particularly proud of in your implementation, and why. Be deliberate in your choices and design, as we'll use them as a starting point for our discussions.
# Terraform

### Deliverables
A single EC2 instance in a minimal VPC at this time.

You should provide at least an executable bash script called `aws-app.sh`. You're welcome to include other files and install/use other tools in your repo as needed, but `aws-app.sh` is what we'll run to test your submission (see the evaluation section).
You need to create your own EC2 key pair and specify it in `terraform.tfvars` (example below).

#### Notes
The remaining variables (project, env, subnet CIDRs, availability zones) also go in `terraform.tfvars`; the AMI built by Packer above is picked up automatically by a Terraform data lookup on its name.
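
A rough sketch of that setup; the key pair name, CIDRs, and AZs below are placeholder values, not anything the repo mandates:

```
# Create a key pair (the name is an example)
aws ec2 create-key-pair --key-name yourname-challenge \
  --query 'KeyMaterial' --output text > ~/.ssh/yourname-challenge.pem
chmod 600 ~/.ssh/yourname-challenge.pem

# Minimal terraform/terraform.tfvars (illustrative values)
cat > terraform/terraform.tfvars <<'EOF'
project  = "opseng-challenge"
env      = "dev"
key_name = "yourname-challenge"
vpc_cidr = "10.0.0.0/16"
cidrs    = "10.0.1.0/24,10.0.2.0/24"
azs      = "us-east-1a,us-east-1b"
EOF
```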

* **Do not check AWS keys or any other secret credentials into git**
* Prefix all of your AWS resources (when possible) with your first name (example: joanne.domain.com)

## Evaluation
# Pre-requisites and Requirements

We'll do the following, using on a stock OSX machine with Python 2.7.10 or higher (but <3.0), the `awscli` Python package installed, and appropriate AWS environment variables set:
```
$ git clone <your username>/<repo name> # Or we'll apply your patch file to a checked-out branch
$ cd <repo name>
$ ./aws-app.sh
```
We expect that this will output a URL, and we'll then visit that URL to confirm it has the output generated by the current `HEAD` of the `master` branch of the repo linked to above.
1. Put your AWS API keys into [pass](https://www.passwordstore.org/) at these paths, which `aws.sh` reads (sample commands follow this list):

When we're evaluating your submission, some of the questions we'll be asking are:
       pass opseng-challenge/access-key
       pass opseng-challenge/secret-key

2. Packer from HashiCorp (https://www.packer.io/)

3. Terraform from HashiCorp (https://www.terraform.io/)
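
For example, creating the two pass entries (the paths are what `aws.sh` expects):

```
pass insert opseng-challenge/access-key
pass insert opseng-challenge/secret-key
```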

# To Do

* Security: lock down the allow-all security group.
* Auto Scaling (a launch configuration and ASG are stubbed out, commented, in `terraform/main.tf`).
* Try making this a Docker image and running it on AWS ECS.

# Bugs

A temporary ssh key was committed to the repo and has since been removed from both the repo and the cloud itself.

The Terraform data lookup for the AMI will choose the latest image; you might want more granular control.

Deployment process: there is no elegant way to get a new version deployed. At this time the process is (roughly the commands sketched below):
1. Rerun Packer to make a new image.
2. Rerun Terraform so it sees the more recent AMI.
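
Concretely, a redeploy is roughly:

```
. ./aws.sh
( cd packer && packer build image.json )   # bakes a new AMI from the app's current master HEAD
( cd terraform && terraform apply )        # the AMI data lookup picks up the newest image
```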

* If we follow the steps above, do we end up with a working app at the URL specified?
* Does the working app reflect what's at the `HEAD` of the `master` branch right now, or at a point in the past?
* If we wanted to push out an updated version of the app's code, how much work would that be?
* Which application(s) and OS were chosen to host the app, and why?
* Which hosting strategy was selected, and did you have a good reason to pick that one?
* Are the decisions and strengths/weaknesses of this strategy discussed?
* How much of the hosting infrastructure is created when calling `aws-app.sh`, and how much does the script assume already exists or is created by hand in the console?

12 changes: 12 additions & 0 deletions aws-app.sh
@@ -0,0 +1,12 @@
#!/bin/sh

# Fail fast if any step errors.
set -e

# NOTE: source ./aws.sh first so the AWS credentials and TF_VAR_* variables are set.

owd=$(pwd)

cd "$owd/packer"
packer build image.json

cd "$owd/terraform"
terraform init
terraform get
terraform apply

10 changes: 10 additions & 0 deletions aws.sh
@@ -0,0 +1,10 @@
#!/bin/sh

# Source this file (". ./aws.sh") so the exported variables reach your current shell.
# Credentials are read from pass; see the README prerequisites for the entry names.

export TF_VAR_aws_access_key="$(pass opseng-challenge/access-key)"
export TF_VAR_aws_secret_key="$(pass opseng-challenge/secret-key)"

export AWS_ACCESS_KEY_ID="${TF_VAR_aws_access_key}"
export AWS_SECRET_ACCESS_KEY="${TF_VAR_aws_secret_key}"

# %H keeps the hour zero-padded (the %k format can produce a leading space).
export TF_date="$(date +%Y%m%d%H%M%S)"

35 changes: 35 additions & 0 deletions packer/image.json
@@ -0,0 +1,35 @@
{
  "variables": {
    "aws_access_key": "{{env `TF_VAR_aws_access_key`}}",
    "aws_secret_key": "{{env `TF_VAR_aws_secret_key`}}"
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami_filter": {
      "filters": {
        "virtualization-type": "hvm",
        "name": "ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*",
        "root-device-type": "ebs"
      },
      "owners": ["099720109477"],
      "most_recent": true
    },
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "opseng-challenge-app {{timestamp}}"
  }],
  "provisioners": [
    {
      "type": "file",
      "source": "./rc.local",
      "destination": "/home/ubuntu/rc.local"
    },
    {
      "type": "shell",
      "script": "./provision.sh"
    }
  ]
}
14 changes: 14 additions & 0 deletions packer/provision.sh
@@ -0,0 +1,14 @@
#!/bin/sh

# Runs at AMI build time via the Packer shell provisioner (as the ubuntu user).
set -e

sudo apt-get update
sudo apt-get install -y git python-pip

# Fetch the app at the current HEAD of master.
mkdir -p /home/ubuntu/deploy
cd /home/ubuntu/deploy
git clone https://github.com/wvchallenges/opseng-challenge-app.git
cd opseng-challenge-app
sudo pip install -r requirements.txt

# The app is started at boot by /etc/rc.local (see packer/rc.local).
sudo cp /home/ubuntu/rc.local /etc/rc.local
sudo chmod +x /etc/rc.local


5 changes: 5 additions & 0 deletions packer/rc.local
@@ -0,0 +1,5 @@
#!/bin/bash

# Start the app at boot; --daemon backgrounds gunicorn so rc.local can return.
cd /home/ubuntu/deploy/opseng-challenge-app
gunicorn app:app --bind 0.0.0.0:8000 --daemon

exit 0

163 changes: 163 additions & 0 deletions terraform/main.tf
@@ -0,0 +1,163 @@
variable "project" {}
variable "env" {}
variable "key_name" {}
variable "vpc_cidr" {}
variable "cidrs" {}
variable "azs" {}

provider "aws" {
region = "us-east-1"
}

data "aws_ami" "appimage" {
most_recent = true
owners = ["505545132866"]
filter {
name = "name"
values = ["opseng-challenge-app*"]
}
}

resource "aws_vpc" "main" {
cidr_block = "${var.vpc_cidr}"

lifecycle {
create_before_destroy = true
}

tags {
Name = "${var.project}-${var.env}-vpc"
}
}

resource "aws_security_group" "allow_all" {
name = "allow_all"
description = "Allow all inbound traffic"
vpc_id = "${aws_vpc.main.id}"

ingress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}

resource "aws_internet_gateway" "igw" {
vpc_id = "${aws_vpc.main.id}"
}

resource "aws_subnet" "public" {
vpc_id = "${aws_vpc.main.id}"
cidr_block = "${element(split(",", var.cidrs), count.index)}"
availability_zone = "${element(split(",", var.azs), count.index)}"
count = "${length(split(",", var.cidrs))}"

tags {
Name = "public_subnet"
}

lifecycle {
create_before_destroy = true
}

map_public_ip_on_launch = true

tags {
Name = "${var.project}-${var.env}-subnet"
}
}

resource "aws_route_table" "public" {
vpc_id = "${aws_vpc.main.id}"

route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.igw.id}"
}

lifecycle {
create_before_destroy = true
ignore_changes = ["route"]
}

tags {
Name = "${var.project}-${var.env}-${element(split(",", var.azs), count.index)}"
}
}

resource "aws_route_table_association" "public_assoc" {
count = "${length(split(",", var.cidrs))}"
subnet_id = "${element(aws_subnet.public.*.id, count.index)}"
route_table_id = "${aws_route_table.public.id}"
}

resource "aws_instance" "web" {
count = 1
ami = "${data.aws_ami.appimage.id}"
instance_type = "t2.micro"
key_name = "dthornton"
availability_zone = "us-east-1a"
security_groups = ["${aws_security_group.allow_all.id}"]
subnet_id = "${element(aws_subnet.public.*.id, count.index)}"

lifecycle {
create_before_destroy = true
}

tags {
Name = "${var.project}-${var.env}-${count.index}"
}

}

/*
resource "aws_lb_target_group" "target_group" {
name = "${var.project}-${var.env}-tg"
port = 80
protocol = "HTTP"
vpc_id = "${aws_vpc.main.id}"
}
*/

/*
resource "aws_placement_group" "placementgroup" {
name = "${var.project}-${var.env}-pg"
strategy = "cluster"
}
*/

/*
resource "aws_launch_configuration" "launchconfig" {
name = "${var.project}-${var.env}-launch-config"
image_id = "${data.aws_ami.appimage.id}"
instance_type = "t2.micro"
key_name = "dthornton"
}
*/

/*
resource "aws_autoscaling_group" "asg" {
availability_zones = ["us-east-1a","us-east-1b"]
name = "${var.project}-${var.env}"
max_size = 5
min_size = 2
health_check_grace_period = 300
health_check_type = "ELB"
desired_capacity = 4
force_delete = true
placement_group = "${aws_placement_group.placementgroup.id}"
launch_configuration = "${aws_launch_configuration.launchconfig.name}"
}
*/

output "url" {
value = "http://${aws_instance.web.public_ip}:8000"
}