Thursday, January 11, 2018

Modifying EC2 security groups via AWS Lambda functions

One task that comes up again and again is adding, removing or updating source CIDR blocks in various security groups in an EC2 infrastructure. This can be automated either fully or partially with the help of simple AWS Lambda functions.

Example 1: Checking a Dynamic DNS IP and replacing it in an EC2 security group

This scenario arises when you have a user without a static IP. They can still get a Dynamic DNS name and have it automatically point to their local dynamic IP. You can check for that name periodically, and update the appropriate rules within EC2 security group(s).

Here is an AWS Lambda function named UpdateSecurityGroupWithHomeIP and written in Python 2.7 that achieves this goal:

import boto3
import copy

# ID of the security group we want to update
SECURITY_GROUP_ID = "sg-XXXX"

# Description of the security rule we want to replace
SECURITY_RULE_DESCR = "My Home IP"

def lambda_handler(event, context):
    new_ip_address = list(event.values())[0]
    result = update_security_group(new_ip_address)
    return result

def update_security_group(new_ip_address):
    client = boto3.client('ec2')
    response = client.describe_security_groups(GroupIds=[SECURITY_GROUP_ID])
    group = response['SecurityGroups'][0]
    for permission in group['IpPermissions']:
        new_permission = copy.deepcopy(permission)
        changed = False
        for ip_range in new_permission['IpRanges']:
            # Only touch the rule whose description matches the one we manage
            if ip_range.get('Description') == SECURITY_RULE_DESCR:
                ip_range['CidrIp'] = "%s/32" % new_ip_address
                changed = True
        if not changed:
            continue
        # Replace the old rule with the updated copy
        client.revoke_security_group_ingress(GroupId=group['GroupId'], IpPermissions=[permission])
        client.authorize_security_group_ingress(GroupId=group['GroupId'], IpPermissions=[new_permission])

    return ""


A few observations:
  • it’s not trivial to do DNS lookups within Lambda, so I preferred to do the DNS lookup in the caller and pass the resulting IP address as the sole argument to the Lambda function, where it is retrieved as new_ip_address in the lambda_handler function
  • in the update_security_group function I iterate through all permission objects in the IpPermissions list associated with the given security group and create a deep copy of each permission
  • if any IP range in a permission object has the description “My Home IP”, I change its CidrIp property to the CIDR block corresponding to new_ip_address
  • finally, I revoke the old permission and authorize the new (deep-copied) permission, skipping permissions that did not change

This Lambda function needs the proper permissions to modify security groups in EC2. I associated it with an IAM role which allows that. Here is the policy associated with that role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeSecurityGroups",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:RevokeSecurityGroupIngress"
            ],
            "Resource": "*"
        }
    ]
}


I call this Lambda function from a Jenkins job that runs periodically. The job does the DNS lookup first, then invokes the Lambda function:

IPADDRESS=`dig my.homeip.example.com | grep IN | grep -v ';' | awk '{print $5}'`
aws lambda invoke \
--invocation-type RequestResponse \
--function-name UpdateSecurityGroupWithHomeIP \
--region us-west-2 \
--log-type Tail \
--payload "{\"ip\":\"$IPADDRESS\"}" \
outputfile.txt
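
If you prefer to do both the lookup and the invocation from Python instead of shelling out to dig and the AWS CLI, here is a minimal sketch, assuming the Dynamic DNS name and region used above (socket.gethostbyname stands in for dig):

import json
import socket

import boto3

def update_home_ip(hostname="my.homeip.example.com",
                   function_name="UpdateSecurityGroupWithHomeIP",
                   region="us-west-2"):
    # Resolve the Dynamic DNS name to the user's current IP address
    ip_address = socket.gethostbyname(hostname)
    client = boto3.client('lambda', region_name=region)
    # Pass the IP as the sole value in the event payload, as the handler expects
    response = client.invoke(
        FunctionName=function_name,
        InvocationType='RequestResponse',
        Payload=json.dumps({"ip": ip_address}),
    )
    return response['Payload'].read()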


Example 2: adding a new IP/CIDR block to several security groups


This is useful when you have several security groups and you need to add a new source CIDR block to all of them.

Here is a Lambda function for this purpose:


import boto3

# Ports that need inbound permissions added for the new CIDR blocks
INGRESS_PORTS = { 
    'web' : [80, 443], 
    'ssh': [22,] 
}
# Tags which identify the security groups you want to update
SECURITY_GROUP_TAG_FOR_WEB = { 'LambdaUpdate': 'web'}
SECURITY_GROUP_TAG_FOR_SSH = { 'LambdaUpdate': 'ssh'}

def lambda_handler(event, context):
    cidr_blocks = list(event.values())
    result = update_security_groups(cidr_blocks)
    return result

def update_security_groups(cidr_blocks):
    client = boto3.client('ec2')

    web_group = get_security_groups_for_update(client, SECURITY_GROUP_TAG_FOR_WEB)
    ssh_group = get_security_groups_for_update(client, SECURITY_GROUP_TAG_FOR_SSH)
    print ('Found ' + str(len(web_group)) + ' WebSecurityGroups to update')
    print ('Found ' + str(len(ssh_group)) + ' SshSecurityGroups to update')

    result = list()
    web_updated = 0
    ssh_updated = 0
    for group in web_group:
        for port in INGRESS_PORTS['web']:
            if update_security_group(client, group, cidr_blocks, port):
                web_updated += 1
                result.append('Updated ' + group['GroupId'])
    for group in ssh_group:
        for port in INGRESS_PORTS['ssh']:
            if update_security_group(client, group, cidr_blocks, port):
                ssh_updated += 1
                result.append('Updated ' + group['GroupId'])

    result.append('Updated ' + str(web_updated) + ' of ' + str(len(web_group)) + ' WebSecurityGroups')
    result.append('Updated ' + str(ssh_updated) + ' of ' + str(len(ssh_group)) + ' SshSecurityGroups')

    return result

def update_security_group(client, group, cidr_blocks, port):
    added = 0
    if len(group['IpPermissions']) > 0:
        for permission in group['IpPermissions']:
            if permission['FromPort'] <= port and permission['ToPort'] >= port:
                # CIDR blocks already present in this permission, so we don't re-add them
                old_prefixes = [ip_range['CidrIp'] for ip_range in permission.get('IpRanges', [])]
                to_add = list()
                for cidr_block in cidr_blocks:
                    if old_prefixes.count(cidr_block) == 0:
                        to_add.append({ 'CidrIp': cidr_block })
                        print(group['GroupId'] + ": Adding " + cidr_block + ":" + str(permission['ToPort']))
                added += add_permissions(client, group, permission, to_add)
    else:
        to_add = list()
        for cidr_block in cidr_blocks:
            to_add.append({ 'CidrIp': cidr_block })
            print(group['GroupId'] + ": Adding " + cidr_block + ":" + str(port))
        permission = { 'ToPort': port, 'FromPort': port, 'IpProtocol': 'tcp'}
        added += add_permissions(client, group, permission, to_add)

    print (group['GroupId'] + ": Added " + str(added))
    return (added > 0)


def add_permissions(client, group, permission, to_add):
    if len(to_add) > 0:
        add_params = {
            'ToPort': permission['ToPort'],
            'FromPort': permission['FromPort'],
            'IpRanges': to_add,
            'IpProtocol': permission['IpProtocol']
        }

        client.authorize_security_group_ingress(GroupId=group['GroupId'], IpPermissions=[add_params])

    return len(to_add)

def get_security_groups_for_update(client, security_group_tag):
    filters = list()
    for key, value in security_group_tag.iteritems():
        filters.extend(
            [
                { 'Name': "tag-key", 'Values': [ key ] },
                { 'Name': "tag-value", 'Values': [ value ] }
            ]
        )

    response = client.describe_security_groups(Filters=filters)
    return response['SecurityGroups']


This function acts on security groups tagged LambdaUpdate=web and LambdaUpdate=ssh. For the groups tagged web, it adds new rules allowing the given IP/CIDR blocks access to ports 80 and 443. For the groups tagged ssh, it does the same for port 22.

The input for this function is {“ip1”: “$IPAddressBlock”} where IPAddressBlock is a Jenkins parameter that the user specifies when running the appropriate Jenkins job. In this case, I used the AWS Lambda Invocation build step in Jenkins.
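
Outside of Jenkins, the invocation from Python looks like this. A minimal sketch; the function name AddCidrToSecurityGroups is hypothetical, so use whatever name you deployed the function under:

import json
import boto3

client = boto3.client('lambda', region_name='us-west-2')

response = client.invoke(
    FunctionName='AddCidrToSecurityGroups',   # hypothetical name for the Lambda above
    InvocationType='RequestResponse',
    Payload=json.dumps({"ip1": "1.2.3.0/24"}),
)
print(json.loads(response['Payload'].read()))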

Friday, December 29, 2017

Experiences with the Kong API Gateway

Kong is an open-source API Gateway that you can install on-premise if you don't want or cannot use the AWS API Gateway. The benefits of putting an API Gateway in front of your actual API endpoints are many: rate limiting, authentication, security, better logging, etc. Kong offers all of these and more via plugins. I've experimented with Kong a bit and I am writing down my notes here for future reference.

Installing Kong on Ubuntu 16.04

Install Kong from deb package

# wget -O kong-community-edition-0.11.2.xenial.all.deb "https://bintray.com/kong/kong-community-edition-deb/download_file?file_path=dists/kong-community-edition-0.11.2.xenial.all.deb"
# dpkg -i kong-community-edition-0.11.2.xenial.all.deb

Install PostgreSQL

# apt-get install postgresql

Create kong database and user

# su - postgres
$ psql
psql (10.1)
Type "help" for help.


postgres=# CREATE USER kong; CREATE DATABASE kong OWNER kong;
postgres=# ALTER USER kong PASSWORD 'somepassword';

Modify kong configuration

# cp /etc/kong/kong.conf.default /etc/kong/kong.conf

Edit kong.conf and set:

pg_host = 127.0.0.1             
pg_port = 5432                  
pg_user = kong                  
pg_password = somepassword
pg_database = kong              

Run kong migrations job

# kong migrations up

Increase open files limit

# ulimit -n 4096

Start kong

# kong start

Install kong dashboard

Install updated version of node first.

# curl -sL https://deb.nodesource.com/setup_6.x -o nodesource_setup.sh
# bash nodesource_setup.sh
# apt-get install nodejs
# npm install -g kong-dashboard

Create systemd service to start kong-dashboard with basic auth

The systemd unit below points kong-dashboard at the Kong admin API on port 8001; the dashboard itself listens on its default port 8080.

# cat /etc/systemd/system/multi-user.target.wants/kong-dashboard.service
[Unit]
Description=kong dashboard
After=network.target


[Service]
ExecStart=/usr/local/bin/kong-dashboard start --kong-url http://localhost:8001 --basic-auth someadminuser=someadminpassword


[Install]
WantedBy=multi-user.target


# systemctl daemon-reload
# systemctl start kong-dashboard

Adding an API to Kong

I used the Kong admin dashboard to add a new API Gateway object which points to my actual API endpoint. Note that the URL as far as Kong is concerned can be anything you want; in this example I chose /kong1. The upstream URL is the actual API endpoint. The Hosts value is a comma-separated list of the Host headers you want Kong to respond to. Here it contains the domain name of my actual API endpoint (stage.mydomain.com) as well as a new domain name that I will use later in conjunction with a load balancer in front of Kong (stage-api.mydomain.com).

  • Name: my-kong-api
  • Hosts: stage.mydomain.com, stage-api.mydomain.com
  • URLs: /kong1
  • Upstream URL: https://stage.mydomain.com/api/graphql/pub

At this point you can query the Kong admin endpoint for the existing APIs. We’ll use HTTPie (on a Mac you can install HTTPie via brew install httpie).

$ http http://stage.mydomain.com:8001/apis
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Fri, 29 Dec 2017 16:23:59 GMT
Server: kong/0.11.2
Transfer-Encoding: chunked

{
    "data": [
        {
            "created_at": 1513401906560,
            "hosts": [
                "stage-api.mydomain.com",
                "stage.mydomain.com"
            ],
            "http_if_terminated": true,
            "https_only": true,
            "id": "ad23a91a-b76a-417d-9889-0088a13e3419",
            "name": "my-kong-api",
            "preserve_host": true,
            "retries": 5,
            "strip_uri": true,
            "upstream_connect_timeout": 60000,
            "upstream_read_timeout": 60000,
            "upstream_send_timeout": 60000,
            "upstream_url": "https://stage.mydomain.com/api/graphql/pub",
            "uris": [
                "/kong1"
            ]
        }
    ],
    "total": 1
}

Note that any operation that you do via the Kong dashboard can also be done via API calls to the Kong admin backend. For lots of examples on how to do that, see the official documentation as well as this blog post from John Nikolai.
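
For example, the API object above could have been created against the admin API instead of through the dashboard. A minimal sketch with Python and requests, assuming the admin API on port 8001 as shown above (the field names are those of the 0.11.x /apis endpoint):

import requests

admin = "http://stage.mydomain.com:8001"

resp = requests.post(admin + "/apis", data={
    "name": "my-kong-api",
    "hosts": "stage.mydomain.com,stage-api.mydomain.com",
    "uris": "/kong1",
    "upstream_url": "https://stage.mydomain.com/api/graphql/pub",
    "https_only": "true",
    "preserve_host": "true",
})
print(resp.status_code, resp.json())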

Using the Kong rate-limiting plugin

In the Kong admin dashboard, go to Plugins -> Add:
  • Plugin name: rate-limiting
  • Config: minute = 10
This means that we are limiting the rate of calls to our API to 10 per minute. You can also specify limits per other units of time. See the rate-limiting plugin documentation for more details.
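
The same plugin can also be enabled through the admin API. A minimal sketch, again assuming the admin API on port 8001 (config.minute is how the 0.11.x plugin endpoint takes this setting):

import requests

admin = "http://stage.mydomain.com:8001"

resp = requests.post(admin + "/apis/my-kong-api/plugins", data={
    "name": "rate-limiting",
    "config.minute": 10,
})
print(resp.status_code, resp.json())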

Verifying that our API endpoint is rate-limited


Now we can verify that the API endpoint is working by issuing a POST request to the Kong URL (/kong1) which points to our actual API endpoint:


$ http --verify=no -v POST "https://stage.mydomain.com:8443/kong1?query=version"
POST /kong1?query=version HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Content-Length: 0
Host: stage.mydomain.com:8443
User-Agent: HTTPie/0.9.9


HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 43
Content-Type: application/json
Date: Sat, 16 Dec 2017 18:16:53 GMT
Server: nginx/1.10.3 (Ubuntu)
Via: kong/0.11.2
X-Kong-Proxy-Latency: 86
X-Kong-Upstream-Latency: 33
X-RateLimit-Limit-minute: 10
X-RateLimit-Remaining-minute: 9


{
   "data": {
       "version": "21633901644387004745"
   }
}

A few things to notice in the call above:

  • we are making an SSL call; by default Kong exposes SSL on port 8443
  • we are passing a query string to the /kong1 URL endpoint; Kong will pass this along to our actual API
  • the HTTP reply contains 2 headers related to rate limiting:
    • X-RateLimit-Limit-minute: 10 (this specifies the rate we set for the rate-limiting plugin)
    • X-RateLimit-Remaining-minute: 9 (this specifies the remaining calls we have)

After 9 requests in quick succession, the rate-limiting headers are:


X-RateLimit-Limit-minute: 10
X-RateLimit-Remaining-minute: 1


After 10 requests:


X-RateLimit-Limit-minute: 10
X-RateLimit-Remaining-minute: 0


After the 11th request we get an HTTP 429 error:


HTTP/1.1 429
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Sat, 16 Dec 2017 18:21:19 GMT
Server: kong/0.11.2
Transfer-Encoding: chunked
X-RateLimit-Limit-minute: 10
X-RateLimit-Remaining-minute: 0


{
   "message": "API rate limit exceeded"
}
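
This check can also be scripted. A minimal sketch that fires a dozen requests and prints the status code and remaining quota (verify=False mirrors the --verify=no flag used with HTTPie above):

import requests

url = "https://stage.mydomain.com:8443/kong1?query=version"

for i in range(12):
    r = requests.post(url, verify=False)   # Kong's SSL port, certificate not verified here
    print(i + 1, r.status_code, r.headers.get("X-RateLimit-Remaining-minute"))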

Putting a load balancer in front of Kong

Calling Kong APIs on non-standard port numbers gets cumbersome. To make things cleaner, I put an AWS ALB in front of Kong. I added a proper SSL certificate via AWS Certificate Manager for the domain stage-api.mydomain.com and associated it to a listener on port 443 of the ALB. I also created two Target Groups, one for HTTP traffic to port 80 on the ALB and one for HTTPS 
traffic to port 443 on the ALB. The first Target Group points to port 8000 on the server running Kong, because that's the port Kong uses for plain HTTP requests. The second Target Group points to port 8443 on the server running Kong, for HTTPS requests.

I was now able to make calls such as these:

$ http -v POST "https://stage-api.mydomain.com/kong1?query=version"
POST /kong1?query=version HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Content-Length: 0
Host: stage-api.mydomain.com
User-Agent: HTTPie/0.9.9

HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 42
Content-Type: application/json
Date: Fri, 29 Dec 2017 17:42:14 GMT
Server: nginx/1.10.3 (Ubuntu)
Via: kong/0.11.2
X-Kong-Proxy-Latency: 94
X-Kong-Upstream-Latency: 29
X-RateLimit-Limit-minute: 10
X-RateLimit-Remaining-minute: 9

{
    "data": {
        "version": "21633901644387004745"
    }
}

Using the Kong syslog plugin

I added the syslog plugin via the Kong admin dashboard. I set the following values:

  • server_errors_severity: err
  • successful_severity: notice
  • client_errors_severity: err
  • log_level: notice
I then created a file called kong.conf in /etc/rsyslog.d on the server running Kong:


# cat kong.conf
if ($programname == 'kong' and $syslogseverity-text == 'notice') then -/var/log/kong.log
& ~

# service rsyslog restart

Now every call to Kong is logged in syslog format in /var/log/kong.log. The message portion of the log entry is in JSON format.
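
Since only the message portion is JSON, pulling it back out of a syslog line takes a little parsing. A minimal sketch, assuming the usual rsyslog layout of timestamp, host and tag followed by the message:

import json

with open("/var/log/kong.log") as f:
    for line in f:
        start = line.find("{")          # the JSON payload starts at the first brace
        if start == -1:
            continue
        entry = json.loads(line[start:])
        # request and latencies are fields of Kong's log serializer
        print(entry.get("request", {}).get("uri"), entry.get("latencies", {}).get("proxy"))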

Sending Kong logs to AWS ElasticSearch/Kibana

I used the process I described in my previous blog post on AWS CloudWatch Logs and AWS ElasticSearch. One difference was that for the Kong logs I had to use a new index in ElasticSearch. 

This was because the JSON object logged by Kong contains a field called response, which was clashing with another field called response already present in the cwl-* index in Kibana. What I ended up doing was copying the Lambda function used to send CloudWatch logs to ElasticSearch and replacing cwl- with cwl-kong- in the transform function. This created a new index cwl-kong-* in ElasticSearch, and at that point I was able to add that index to a Kibana dashboard and visualize and query the Kong logs.

This is just scratching the surface of what you can do with Kong.

Friday, November 24, 2017

Using AWS CloudWatch Logs and AWS ElasticSearch for log aggregation and visualization

If you run your infrastructure in AWS, then you can use CloudWatch Logs and AWS ElasticSearch + Kibana for log aggregation/searching/visualization as an alternative to either rolling your own ELK stack, or using a 3rd party SaaS solution such as Logentries, Loggly, Papertrail or the more expensive Splunk, Sumo Logic etc.

Here are some pointers on how to achieve this.

1) Create IAM policy and role allowing read/write access to CloudWatch logs

I created an IAM policy called cloudwatch-logs-access with the following content:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "logs:DescribeLogStreams"
            ],
            "Resource": [
                "arn:aws:logs:*:*:*"
            ]
        }
    ]
}


Then I created an IAM role called cloudwatch-logs-role and attached the cloudwatch-logs-access policy to it.

2) Attach IAM role to EC2 instances

I attached the cloudwatch-logs-role IAM role to all EC2 instances from which I wanted to send logs to CloudWatch (I went to Actions --> Instance Settings --> Attach/Replace IAM Role and attached the role)

3) Install and configure CloudWatch Logs Agent on EC2 instances

I followed the instructions here for my OS, which is Ubuntu.

I first downloaded a Python script:

# curl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O

Then I ran the script in the region where my EC2 instances are:


# python awslogs-agent-setup.py --region us-west-2
Launching interactive setup of CloudWatch Logs agent ...
Step 1 of 5: Installing pip ...DONE
Step 2 of 5: Downloading the latest CloudWatch Logs agent bits ... DONE
Step 3 of 5: Configuring AWS CLI ...
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [us-west-2]:
Default output format [None]:
Step 4 of 5: Configuring the CloudWatch Logs Agent ...
Path of log file to upload [/var/log/syslog]:
Destination Log Group name [/var/log/syslog]:
Choose Log Stream name:
  1. Use EC2 instance id.
  2. Use hostname.
  3. Custom.
Enter choice [1]: 2
Choose Log Event timestamp format:
  1. %b %d %H:%M:%S    (Dec 31 23:59:59)
  2. %d/%b/%Y:%H:%M:%S (10/Oct/2000:13:55:36)
  3. %Y-%m-%d %H:%M:%S (2008-09-08 11:52:54)
  4. Custom
Enter choice [1]: 3
Choose initial position of upload:
  1. From start of file.
  2. From end of file.
Enter choice [1]: 1
More log files to configure? [Y]:

I continued by adding more log files such as apache access and error logs, and other types of logs.

You can start/stop/restart the CloudWatch Logs agent via:

# service awslogs start

The awslogs service writes its logs in /var/log/awslogs.log and its configuration file is in /var/awslogs/etc/awslogs.conf.

4) Create AWS ElasticSearch cluster

Not much to say here. Follow the prompts in the AWS console :)

For the initial Access Policy for the ES cluster, I chose an IP-based policy and specified the source CIDR blocks allowed to connect:

 "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-west-2:accountID:domain/my-es-cluster/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "1.2.3.0/24",
            "4.5.6.7/32"
          ]
        }
      }
    }
 ]

5) Create subscription filters for streaming CloudWatch logs to ElasticSearch

First, make sure that the log files you configured with the AWS CloudWatch Log agent are indeed sent to CloudWatch. For each log file name, you should see a CloudWatch Log Group with that name, and inside the Log Group you should see multiple Log Streams, each Log Stream having the same name as the hostname sending those logs to CloudWatch.
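
A quick way to confirm this from Python is to list the Log Groups and the Log Streams inside one of them. A minimal sketch with boto3, using the /var/log/syslog group configured above:

import boto3

logs = boto3.client('logs', region_name='us-west-2')

# One Log Group per configured log file
for group in logs.describe_log_groups()['logGroups']:
    print(group['logGroupName'])

# One Log Stream per host shipping that log file
for stream in logs.describe_log_streams(logGroupName='/var/log/syslog')['logStreams']:
    print(stream['logStreamName'], stream.get('lastEventTimestamp'))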

I chose one of the Log Streams, went to Actions --> Stream to Amazon Elasticsearch Service, chose the ElasticSearch cluster created above, then created a new Lambda function to do the streaming. I had to create a new IAM role for the Lambda function. I created a role I called lambda-execution-role and associated with it the pre-existing IAM policy AWSLambdaBasicExecutionRole.

Once this Lambda function is created, subsequent log subscription filters for other Log Groups will reuse it for streaming to the same ES cluster.
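
If you prefer to create those additional subscription filters programmatically rather than from the console, here is a minimal sketch with boto3. The Lambda function name is whatever the console generated in your account (LogsToElasticsearch_my-es-cluster is an assumption), and CloudWatch Logs must already be allowed to invoke it:

import boto3

logs = boto3.client('logs', region_name='us-west-2')

logs.put_subscription_filter(
    logGroupName='/var/log/apache2/access.log',    # another Log Group to stream
    filterName='stream-to-elasticsearch',
    filterPattern='',                               # empty pattern streams every event
    destinationArn='arn:aws:lambda:us-west-2:accountID:function:LogsToElasticsearch_my-es-cluster',
)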

One important note here is that you also need to allow the role lambda-execution-role to access the ES cluster. To do that, I modified the ES access policy and added a statement for the ARN of this role:

    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::accountID:role/lambda-execution-role"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-west-2:accountID:domain/my-es-cluster/*"
    }

6) Configure index pattern in Kibana

The last step is to configure Kibana to use the ElasticSearch index for the CloudWatch logs. If you look under Indices in the ElasticSearch dashboard, you should see indices of the form cwl-2017.11.24. In Kibana, add an Index Pattern of the form cwl-*. It should recognize the @timestamp field as the timestamp for the log entries, and create the Index Pattern correctly.

Now if you go to the Discover screen in Kibana, you should be able to visualize and search your log entries streamed from CloudWatch.

Monday, July 31, 2017

Apache 2.4 authentication and whitelisting scenarios

I have these examples scattered among many Apache installations, so I wanted to gather my notes here for my benefit, and hopefully for others as well. The following scenarios depict various requirements for Apache 2.4 authentication and whitelisting. They are all for Apache 2.4.x running on Ubuntu 14.04/16.04.

Scenario 1: block all access to Apache except to a list of whitelisted IP addresses and networks

Apache configuration snippet:

  <Directory /var/www/html/>
     IncludeOptional /etc/apache2/whitelist.conf
     Order allow,deny
     Allow from all
  </Directory>

Contents of whitelist.conf file:

# local server IPs
Require ip 127.0.0.1
Require ip 172.31.2.2

# Office network
Require ip 1.2.3.0/24

# Other IP addresses
Require ip 4.5.6.7/32
Require ip 5.6.7.8/32
etc.

Scenario 2: enable basic HTTP authentication but allow specific IP addresses through with no authentication

Apache configuration snippet:

  <Directory /var/www/html/>
     AuthType basic
     AuthBasicProvider file
     AuthName "Restricted Content"
     AuthUserFile /etc/apache2/.htpasswd

     Require valid-user
     IncludeOptional /etc/apache2/whitelist.conf
     Satisfy Any
  </Directory>

The contents of whitelist.conf are similar to the ones in Scenario 1.

Scenario 3: enable basic HTTP authentication but allow access to specific URLs with no authentication

Apache configuration snippet:

  <Directory /var/www/html/>
     Order allow,deny
     Allow from all

     AuthType Basic
     AuthName "Restricted Content"
     AuthUserFile /etc/apache2/.htpasswd

     SetEnvIf Request_URI /.well-known/acme-challenge/*  noauth=1
     <RequireAny>
       Require env noauth
       Require valid-user
     </RequireAny>
  </Directory>

This is useful when you install SSL certificates from Let's Encrypt and you need to allow the Let's Encrypt servers access to the HTTP challenge directory.

Thursday, June 01, 2017

SSL termination and http caching with HAProxy, Varnish and Apache


A common requirement when setting up a development or staging server is to try to mimic production as much as possible. One scenario I've implemented a few times is to use Varnish in front of a web site but also use SSL. Since Varnish can't handle encrypted traffic, SSL needs to be terminated before it hits Varnish. One fairly easy way to do it is using HAProxy to terminate both HTTP and HTTPS traffic, then forwarding the unencrypted traffic to Varnish, which then forwards non-cached traffic to Apache or nginx. Here are the steps to achieve this on an Ubuntu 16.04 box.

1) Install HAProxy and Varnish

# apt-get install haproxy varnish


2) Get SSL certificates from Let’s Encrypt

# wget https://dl.eff.org/certbot-auto
# chmod +x certbot-auto
# ./certbot-auto -a webroot --webroot-path=/var/www/mysite.com -d mysite.com certonly

3) Generate combined chain + key PEM file to be used by HAProxy

# cat /etc/letsencrypt/live/mysite.com/fullchain.pem /etc/letsencrypt/live/mysite.com/privkey.pem > /etc/ssl/private/mysite.com.pem

4) Configure HAProxy

Edit haproxy.cfg and add frontend sections for ports 80 and 443, plus a backend section pointing to Varnish on port 8888:

# cat /etc/haproxy/haproxy.cfg
global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # Default ciphers to use on SSL-enabled listening sockets.
        # For more information, see ciphers(1SSL). This list is from:
        #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
        ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
        ssl-default-bind-options no-sslv3
        tune.ssl.default-dh-param 2048

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

frontend www-http
   bind 172.31.8.204:80
   http-request set-header "SSL-OFFLOADED" "1"
   reqadd X-Forwarded-Proto:\ http
   default_backend varnish-backend

frontend www-https
   bind 172.31.8.204:443 ssl crt mysite.com.pem
   http-request set-header "SSL-OFFLOADED" "1"
   reqadd X-Forwarded-Proto:\ https
   default_backend varnish-backend

backend varnish-backend
   redirect scheme https if !{ ssl_fc }
   server varnish 172.31.8.204:8888 check

Enable UDP in rsyslog for haproxy logging by uncommenting 2 lines in /etc/rsyslog.conf:

# provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="514")

Restart rsyslog and haproxy

# service rsyslog restart
# service haproxy restart

5) Configure varnish to listen on port 8888

Ubuntu 16.04 is using systemd for service management. You need to edit 2 files to configure the port varnish will listen on:

/lib/systemd/system/varnish.service
/etc/default/varnish

In both, set the port after the -a flag to 8888, then stop the varnish service, reload the systemctl daemon and restart the varnish service:

# systemctl stop varnish.service
# systemctl daemon-reload
# systemctl start varnish.service

By default, Varnish will send non-cached traffic to port 8080 on localhost.

6) Configure Apache or nginx to listen on 8080

For Apache, change port 80 to 8080 in all virtual hosts, and also change 80 to 8080 in /etc/apache2/ports.conf.



