Tuesday, February 24, 2009

You're not a cloud provider if you don't provide an API

Cloud computing is all the rage these days, with everybody and their brother claiming to be a 'cloud provider'. Just because a hosting company has a farm of virtual servers that it can parcel out to its customers doesn't mean it is operating 'in the cloud'. For that to be the case, it needs to offer a solid API that allows its customers to manage resources such as virtual server instances, storage mounts, IP addresses, load balancer pools, firewall rules, etc.

A short discussion on 'XaaS' nomenclature is in order here: 'aaS' stands for 'as a Service', and X can take various values, for example P==Platform, S==Software, I==Infrastructure. You will see these acronyms in pretty much every industry-sponsored article about cloud computing. Pundits seem to love this kind of stuff. When I talk about cloud providers in this post, I mean providers of 'Infrastructure as a Service', things like the ones I mentioned above -- virtual servers, networking and storage resources, in short the low-level plumbing of an infrastructure.

A good example of 'Platform as a Service' is Google AppEngine, which offers both a development environment (right now Python-specific), and an API to interact with the 'Google cloud' when deploying your GAE application.

'Software as a Service' is pretty much what 'ASP' used to be in the dot com days (ASP == Application Service Provider, if you don't remember your acronyms). The poster child for SaaS these days seems to be salesforce.com. I do want to emphasize, however, that one significant difference between SaaS and ASP is that SaaS providers DO offer an API for your application to interact with the resources they expose.

So...the common thread between the XaaS offerings is the existence of an API which allows you, as a systems and/or application architect, to interact with and manage the resources offered by the particular provider.

I've been using two cloud APIs here at OpenX, one from AppNexus and one from Amazon EC2. The AppNexus API allows you to reserve physical servers, start up, shut down and delete virtual instances on each server, clone a virtual instance, manage load balancer pools and SSL certificates at the LB level, etc. In short, it's a very solid and easy to use API.

The Amazon EC2 API is more fine-grained than the one from AppNexus, which can be an advantage, but which also makes it harder to coordinate the management of various resources. For example, to launch an EC2 instance you first need to create a keypair, potentially a security group, maybe an EBS volume and an elastic IP, and only then can you tie everything together via yet more EC2 API calls. For this reason, we're building our own tools around the Amazon API, tools which allow us to deploy an instance with all its associated resources via a single command-line script (and yes, we call this collection of tools the MCP). We're also using slack to deploy specific packages and applications to each instance we launch, but that's a topic for another post.
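
To give an idea of the coordination involved, here's roughly what that sequence looks like using Amazon's command-line EC2 API tools (a sketch only; all the IDs below are made up, and in practice you'd wait for each resource to become available before using it):

# create a keypair, saving the private key locally
ec2-add-keypair webkey > webkey.pem
# create a security group and open port 80 to the world
ec2-add-group webfarm -d "web servers"
ec2-authorize webfarm -p 80
# only now can you launch the instance...
ec2-run-instances ami-12345678 -k webkey -g webfarm -z us-east-1a
# ...then allocate and associate an elastic IP...
ec2-allocate-address
ec2-associate-address -i i-12345678 75.101.128.10
# ...and create and attach an EBS volume in the same availability zone
ec2-create-volume --size 10 -z us-east-1a
ec2-attach-volume vol-12345678 -i i-12345678 -d /dev/sdf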

So what does all this mean to you as a systems or application architect? For a system administrator, I think it means that you need to shore up your programming skills so that you can take advantage of these APIs and automate the deployment, testing and scaling of your infrastructure. For an application architect, it means that you need to shore up your sysadmin skills so you can understand the lower-level resources exposed by cloud APIs and use them to full advantage. I think the future is bright for people who possess both sets of skills.

Tuesday, February 17, 2009

Helping the 'printable world wide web' movement

Alexander Artemenko pointed out to me that my blog lacked a sane CSS stylesheet for printing. He was nice enough to provide one for me (since I'm no CSS wizard), and I inserted it into my blog's template. Alexander runs a campaign to convince bloggers to make their blog content printable.

BTW, here's all I had to add to my Blogger template to make the content printable:


<style type="text/css">
@media print {
#sidebar, #navbar-iframe, #blog-header,
#comments h4, #comments-block, #footer,
span.statcounter, #b-backlink {display: none;}
#wrap, #content, #main-content {width: 100%; margin: 0; background: #FFFFFF;}
}
</style>

Wednesday, February 04, 2009

Load Balancing in Amazon EC2 with HAProxy

Until Amazon offers a load balancing service in its EC2 environment, people are forced to use a software-based load balancing solution. One of the most common out there is HAProxy. I've been looking at it for the past 2 months or so, and recently we started to use it in production here at OpenX. I am very impressed with its performance and capabilities. I'll explore here some of the functionality that HAProxy offers, and also discuss some of the non-obvious aspects of its configuration.

Installation

I installed HAProxy via yum. Here's the version that was installed using the default CentOS repositories on a CentOS 5.x box:
# yum list installed | grep haproxy

haproxy.i386 1.3.14.6-1.el5 installed


The RPM installs an init.d service called haproxy that you can use to start/stop the haproxy process.

Basic Configuration

In true Unix fashion, all configuration is done via a text file: /etc/haproxy/haproxy.cfg. It's very important that you read the documentation for the configuration file. The official documentation for HAProxy 1.3 is here.

Emulating virtual servers

In version 1.3, you can specify a frontend section, which defines an IP address/port pair for requests coming into the load balancer (think of it as a way to specify a virtual server/virtual port pair on a traditional load balancer), and one or more backend sections, which correspond to the real IP addresses and ports of the servers handling the requests. If you can assign multiple external IP addresses to your HAProxy server, then each one of these IPs can function as a virtual server (via a frontend declaration), sending traffic to real servers declared in a backend.

However, one fairly large limitation of EC2 instances is that you only get one external IP address per instance. This means that you can have HAProxy listen on port 80 on a single IP address in EC2. How then can you have multiple 'virtual servers' on an EC2 HAProxy load balancer? The answer is in a new feature of HAProxy called ACLs.

Here's what the official documentation says:

2.3) Using ACLs
---------------

The use of Access Control Lists (ACL) provides a flexible solution to perform
content switching and generally to take decisions based on content extracted
from the request, the response or any environmental status. The principle is
simple :

- define test criteria with sets of values
- perform actions only if a set of tests is valid

The actions generally consist in blocking the request, or selecting a backend.

So let's say for example that you want to handle both www.example1.com and www.example2.com using the same HAProxy instance, but you want to load balance traffic for www.example1.com to server1 and server2 with IP addresses 192.168.1.1 and 192.168.1.2, while traffic for www.example2.com gets load balanced to server3 and server4 with IP addresses 10.0.0.3 and 10.0.0.4. Traffic for other domains will be sent to a default backend.

First, you define a frontend section in haproxy.cfg similar to this:

frontend myfrontend *:80
log global
maxconn 25000
option forwardfor
acl acl_example1 url_sub example1
acl acl_example2 url_sub example2
use_backend example1_farm if acl_example1
use_backend example2_farm if acl_example2
default_backend default_farm

This tells haproxy that there are 2 ACLs defined -- one called acl_example1, which is triggered if the incoming HTTP request is for a URL that contains the expression 'example1', and one called acl_example2, which is triggered if the incoming HTTP request is for a URL that contains the expression 'example2'.

If acl_example1 is triggered, the backend used will be example1_farm. If acl_example2 is triggered, the backend used will be example2_farm. If no ACL is triggered, the default backend used will be default_farm.

This is the simplest form of ACLs. HAProxy supports many more, and you're strongly advised to read the ACL section in the documentation for a more in-depth discussion. One caveat: url_sub matches a substring of the URL in the request line (typically just the path), not the Host header, so a request for any URL containing 'example1' will trigger the first ACL; to switch on the domain name itself, you'd use a header-based criterion such as hdr_dom(host). That said, URL-based ACLs are especially useful in an EC2 environment.

The backend sections of haproxy.cfg will look similar to this:

backend example1_farm
mode http
balance roundrobin
server server1 192.168.1.1:80 check
server server2 192.168.1.2:80 check

backend example2_farm
mode http
balance roundrobin
server server3 10.0.0.3:80 check
server server4 10.0.0.4:80 check

backend default_farm
mode http
balance roundrobin
server server5 192.168.1.5:80 check
server server6 192.168.1.6:80 check

Logging

You can have haproxy log to syslog, but first you need to allow syslog to receive UDP traffic from 127.0.0.1 on port 514. I'll discuss syslog-ng here; its configuration file is /etc/syslog-ng/syslog-ng.conf. To allow this UDP traffic, add the line 'udp(ip(127.0.0.1) port(514));' to the source s_sys section, which in my case looks like this:

source s_sys {
file ("/proc/kmsg" log_prefix("kernel: "));
unix-stream ("/dev/log");
internal();
udp(ip(127.0.0.1) port(514));
};
Also add a filter for facility local0:

filter f_filter9 { facility(local0); };

And finally associate that filter with the d_mesg destination, which sends messages to /var/log/messages:

log { source(s_sys); filter(f_filter9); destination(d_mesg); };

Restart syslog-ng via its init.d script.

Now for the HAProxy configuration -- you need to have a line similar to this in the 'global' section of haproxy.cfg:

global
log 127.0.0.1 local0 info

This tells haproxy to log to facility 'local0' on the localhost using the severity 'info'. You could send logs to a remote syslog server just as well.

Once you define this in the global section, you can specify the logging mechanism either in the 'defaults' section (which means that all frontends will log this way), or on a frontend-by-frontend basis. If you want to have it in the defaults section, just write:

defaults
log global

Once you restart haproxy, you should see messages like this in /var/log/messages:

Feb  2 22:39:49 127.0.0.1 haproxy[19150]: Connect from A.B.C.D:44463 to 10.0.0.1:80 (your_frontend_name/HTTP)
However, if you're handling HTTP traffic and would like to see the exact HTTP requests handled by HAProxy, you need to add this line either to the defaults section, or to a specific frontend:

option httplog

In this case, the log will contain lines that look like regular Apache combined log entries.

A caveat: if you do enable httplog, make sure /var has lots of disk space. If your HAProxy handles a lot of traffic, the messages file will grow very large, very fast. Just don't let /var be part of the typically small / partition, or you can end up in a world of trouble.

Logging the client source IP in the backend web logs

One issue with load balancers and reverse proxies is that the backend servers will see traffic as always originating from the IP address of the LB or reverse proxy. This is obviously a problem when you're trying to get stats from your web logs. To mitigate this issue, many LBs/proxies use the X-Forwarded-For header to send the IP address of the client to the destination server. HAProxy offers this functionality via the forwardfor option. You can simply declare

option forwardfor

in your backend, and all your backend servers will receive the X-Forwarded-For header.

Of course, you also have to tell your Web server to log this header. In Apache, you need to modify the LogFormat directive and replace %h with %{X-Forwarded-For}i.
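
For example, with Apache's standard 'combined' format, the modified directive would look something like this (adjust to match whatever LogFormat you already use):

LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined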

SSL

To handle SSL traffic in HAProxy, you need 3 things:

1) Define a frontend with a unique name which handles *:443
2) Send traffic to real_server_IP_1:443 through real_server_IP_N:443 in the backend(s) associated with the frontend
3) Specify 'mode tcp' instead of 'mode http' in both the frontend section and the backend section(s) which handle port 443. Otherwise you won't see any SSL traffic hitting your real servers, and you'll wonder why...
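
Putting it together, a minimal SSL pass-through configuration might look like this (a sketch, reusing the server IPs from the earlier example; in 'mode tcp' HAProxy just forwards the encrypted stream, so the certificates live on the real servers):

frontend ssl_frontend *:443
mode tcp
maxconn 25000
default_backend ssl_farm

backend ssl_farm
mode tcp
balance roundrobin
server server1 192.168.1.1:443 check
server server2 192.168.1.2:443 check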

Load balancing algorithms

HAProxy can handle several load balancing algorithms:
  • round-robin: requests are rotated among the servers in the backend -- note that servers declared in the backend section also accept a weight parameter which specifies their relative weight in that backend; the round-robin algorithm will respect that weight ratio
  • leastconn: the request is sent to the server with the lowest number of connections; round-robin is used if servers are similarly loaded
  • source: a hash of the source IP is divided by the total weight of the running servers to determine which server will receive the request; this ensures that clients coming from the same IP address always hit the same server, which is a poor man's session persistence solution (see the sketch after this list)
  • uri: the part of the URL up to a question mark is hashed and used to choose a server that will handle the request; this is useful when you want certain sub-parts of your web site to be served by certain servers (this is used with proxy caches to maximize the cache hit rate)
  • url_param: can be used to check certain parts of the URL, for example values sent via POST requests; for example a request which specifies a user_id parameter with a certain value can get directed to the same server using the url_param method -- so this is another form of achieving session persistence in some cases (see the documentation for more details)
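
Selecting an algorithm is just a matter of the balance keyword in the backend. As a sketch, here's the source-based balancing mentioned above (server IPs reused from the earlier example):

backend sticky_farm
mode http
balance source
server server1 192.168.1.1:80 check
server server2 192.168.1.2:80 check
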
Session persistence with cookies

If you're OK with the fact that not all client browsers accept cookies, and you still want to use cookies as a session persistence mechanism, then HAProxy offers an easy way to do so. If you add this line to the backend section:

cookie SERVERID insert nocache indirect

then you're telling HAProxy to insert a cookie named SERVERID in the HTTP response; the cookie is sent to the client browser via a Set-Cookie header in the response, and is sent back by the client in a Cookie header in all subsequent requests. Note that this is only a session cookie, and will not be written to disk by the client browser. For this reason, and for issues related to caching, the documentation recommends specifying the other 2 options, 'nocache' and 'indirect'. In particular, 'indirect' means that the cookie is removed from the HTTP request once HAProxy has processed it, so your application running on the backend servers will never see it.

Once you define the cookie, you need to associate it with the servers in the backend, like this:

server server1 10.1.1.1:80 cookie server01 check
server server2 10.1.1.2:80 cookie server02 check

If a client request initially gets sent to server serverN, HAProxy will insert a SERVERID cookie with the value corresponding to serverN in the response. In the requests that follow, the client sends back this SERVERID in the cookie and is therefore directed to the same server for the duration of the session.
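
Putting the pieces together, a backend using cookie-based persistence would look something like this (a sketch assembled from the directives above):

backend example1_farm
mode http
balance roundrobin
cookie SERVERID insert nocache indirect
server server1 10.1.1.1:80 cookie server01 check
server server2 10.1.1.2:80 cookie server02 check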

Server health checks

HAProxy verifies the health of the servers declared in the backend section by sending them periodic HTTP requests. You need to specify 'check' in the server declaration line. Here is the appropriate section from the official documentation:
check
This option enables health checks on the server. By default, a server is
always considered available. If "check" is set, the server will receive
periodic health checks to ensure that it is really able to serve requests.
The default address and port to send the tests to are those of the server,
and the default source is the same as the one defined in the backend. It is
possible to change the address using the "addr" parameter, the port using the
"port" parameter, the source address using the "source" address, and the
interval and timers using the "inter", "rise" and "fall" parameters. The
request method is defined in the backend using the "httpchk", "smtpchk",
and "ssl-hello-chk" options. Please refer to those options and parameters for
more information.
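
For example, to have HAProxy check a specific URI via HTTP GET, and to tune how often servers are probed and how many checks they must fail or pass before being marked down or up, you could write something like this (a sketch; /healthcheck.html is a made-up URI):

backend example1_farm
mode http
balance roundrobin
option httpchk GET /healthcheck.html
server server1 192.168.1.1:80 check inter 2000 rise 2 fall 3
server server2 192.168.1.2:80 check inter 2000 rise 2 fall 3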

Performance tuning

Section 1.2 of the official documentation details the variables you can set to tweak maximum performance out of your HAProxy. The only parameter I've found critical so far is maxconn, which in some of the sample configuration files was set to 2,000. This means that if HAProxy is hit with more than 2,000 concurrent connections, only the first 2,000 will be serviced, and subsequent ones will be queued. For this reason, I recommend you set maxconn to a high number (such as 25,000) in all the sections of your haproxy.cfg file: defaults, frontend and backend.
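
As a sketch, that just means repeating the parameter in each section (the frontend example earlier in this post already sets it):

global
maxconn 25000

defaults
maxconn 25000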

From what I've seen so far, the performance of HAProxy itself is very satisfactory. Even on an EC2 m1.small instance, HAProxy took less than 1% CPU for a web site we maintain that was hit with around 20,000 connections. I can guarantee that you will discover many other bottlenecks in your infrastructure long before HAProxy itself becomes your bottleneck. The only caveat in all this is the maxconn parameter above, which you do need to set to a high value to avoid unnecessary throttling of connections at the HAProxy layer.

Utilization statistics

HAProxy offers very nice utilization statistics, with tables showing the servers in all declared backends. Here's what these tables look like:

[Stats page screenshot: a table for the 'my_website' backend, with per-server columns for Queue, Sessions, Bytes, Denied, Errors, Warnings and Server status; both server01 and server02 are shown as UP, along with their session counts, bytes transferred, retries and uptimes.]

To enable stats, add lines such as these either to the 'defaults' section, or to a specific backend section:

stats enable
stats uri /lb?stats
stats realm Haproxy\ Statistics
stats auth myusername:mypassword


Then hit http://external.ip.of.haproxy/lb?stats and you'll be presented with a basic HTTP authentication dialog. Log in with the credentials you specified.

High-availability strategies

In an ideal situation, you would have 2 HAProxy instances sharing an external IP address via a heartbeat-type protocol. In case one of them goes down, the other one would assume the IP and your site would remain available at all times. You could use Linux-HA, or Wackamole with the Spread toolkit. However, this is not possible in Amazon EC2, because IP addresses cannot be shared among instances in the manner that heartbeat-type protocols expect.

What you can do instead is use an Elastic IP address and associate it with your HAProxy instance. You can then keep a stand-by HAProxy instance in sync with the live one (only haproxy.cfg needs to be rsync-ed across). Your monitoring system can detect when the live HAProxy instance goes down, and automatically reassign the Elastic IP to the stand-by instance, for example via the EC2 API Tools command ec2-associate-address.
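
For example (a sketch; the instance ID is made up, and the address is the Elastic IP your site's DNS points to):

ec2-associate-address -i i-87654321 75.101.128.10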
