Thursday, September 29, 2005

HoneyMonkeys: an adventure in black box testing

HoneyMonkeys is the name of a Microsoft research project in computer security. It combines the concept of honeypots with an attitude of "monkey see, monkey do". Specifically, it consists of a cluster of WinXP machines with various configurations (SP1, SP2 non-patched, SP2 partially patched, SP2 fully patched) running as Virtual Machines for easy rollout and reloading.

The XP machines run the IE browser in an automated fashion, pointing it to sites known or suspected of hosting malware. Each machine also runs monitoring software that records every single file and Registry read/write, as well as any attempt to hook malware into Auto-Start Extensibility Points -- for many more details on this, see this research report from Microsoft. The machines act as "monkeys" by merely pointing the browser at suspected malicious Web sites and then waiting for a few minutes. The automated IE drivers do not click on any dialog box elements that might prompt for installation of software. Thus, any file created outside the browser's temporary directory, and any Registry write, means that malware was installed automatically, without any action by the "user" (i.e. the monkey in this case). When a machine detects that malware was installed, it forwards the URL to a "better" machine (in terms of service packs and patches installed on it) in the cluster. If the URL gets to a fully patched machine and still results in the installation of malware, a zero-day exploit has been found, i.e. an exploit that exists in the wild for which there is no available patch.
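To make the escalation algorithm concrete, here's a rough Python sketch of the pipeline logic as I understand it from the report; the function names and the monitoring stub are my own invention, not Microsoft's actual code:

# Rough sketch of the HoneyMonkey escalation logic -- my own
# reconstruction from the report, not Microsoft's code.

# VM configurations, ordered from least to most patched
PATCH_LEVELS = ["SP1", "SP2-unpatched", "SP2-partially-patched",
                "SP2-fully-patched"]

def browse_and_monitor(url, patch_level):
    """Placeholder for the real monkey + monitors: point IE at the URL
    on a VM at this patch level, wait a few minutes, and return True if
    any file/Registry/ASEP write happened outside the browser's
    temporary directory."""
    raise NotImplementedError

def classify(url):
    """Escalate the URL up the patch ladder for as long as it keeps
    infecting machines; an infection on a fully patched machine means
    a zero-day exploit."""
    for level in PATCH_LEVELS:
        if not browse_and_monitor(url, level):
            if level == PATCH_LEVELS[0]:
                return "no exploit detected"
            return "exploit patched at level " + level
    return "zero-day: infects even fully patched machines"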

As the authors of the research report point out, this approach qualifies as "black-box", since it simply points the browser to various URLs and watches for modifications to the file system, the registry and the memory. A more "white-box" approach would be to attempt to identify malware by trying to match signatures or behaviors against a known list/database. The black-box approach turns out to be much simpler to implement and very effective. The authors report finding the first zero-day exploit using their HoneyMonkeys setup in July 2005.

I think there are a lot of lessons in this story for us testers:
  • Use Virtual Machine technologies such as VMware or Virtual PC for easy rollout and reload of multiple OS/software configurations -- when a HoneyMonkey machine is infected with malware, its Virtual Machine image is simply reloaded from a "golden image"
  • Automate, automate, automate -- there is no way "real monkeys" in the shape of humans can click through thousands of URLs in order to find the ones that host malware
  • Apply the KISS principle -- the monkey software is purposely kept simple and stupid; the intelligence resides with the various pieces of monitoring software that watch for modifications to the host machine
  • Don't underestimate black-box techniques -- there is a tendency to relegate black-box techniques to a second-rate status compared to white-box testing; as the HoneyMonkey project demonstrates, sometimes the easier way out is better
For system/security administrators who deal with XP, the bigger lesson is of course to fully patch their machines and instruct their users not to click on popups and other prompts. This is of course easier said than done.

Friday, September 23, 2005

Oblique Strategies and testing

A message posted to comp.lang.python pointed me to a post by Robin Parmar on Oblique Strategies. I had read about this concept before, but I didn't really delve into it, so it was nice to see it mentioned again. The Oblique Strategies are one-line sentences devised by Brian Eno and Peter Schmidt as ways to "jog your mind" and get you unstuck in moments when your creative juices don't flow as much as you would like to. They offer "tangential" solutions to problems, as opposed to the more obvious, and oftentimes futile, "head-on" solutions.

It strikes me that the Oblique Strategies could be an important tool in a tester's arsenal. After all, good testers should be able to "sniff" problems that are not obvious; they should be able to go on "tangents" at any time, to follow their intuition in finding bugs that might be due to subtle interactions. I find it funny that, according to Brian Eno, the very first Oblique Strategy he wrote was "Honour thy error as a hidden intention." Errors, bugs...sounds pretty familiar to me!

I was thrilled when I saw that Robin wrote a Python script that emits a randomly chosen Oblique Strategy every time it's run. I plan on using it regularly to jog my devious tester mind :-)
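For flavor, here's a minimal sketch of what such a script might look like -- this is not Robin's actual code, and the list here is obviously abridged:

#!/usr/bin/env python
# Minimal sketch of an Oblique Strategies generator (not Robin's
# actual script); the full deck has over a hundred strategies.
import random

STRATEGIES = [
    "Honour thy error as a hidden intention.",
    "Slice into equal pieces.",
    "Use an old idea.",
]

print(random.choice(STRATEGIES))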

Here's one strategy that the script has already printed twice, so I'd better pay attention to it today: "Slice into equal pieces." I can't really tell you what it means; I'm not yet done mind-jogging....

Web app testing with Python part 3: twill

In a recent thread on comp.lang.python, somebody was inquiring about ways to test from within Python code whether a Web site is up. Among the options proposed, I referred the OP to twill, a Web application testing package written in pure Python by Titus Brown (who, I can proudly say, is a fellow SoCal Piggie).

I recently took the latest version of twill for a ride and I'll report here some of my experiences. My testing scenario was a freshly installed instance of Bugzilla: I wanted to verify that I could correctly post bugs and retrieve them by bug number. Using twill, all this proved to be a snap.

First, a few words about twill: it's a re-implementation of Cory Dodt's PBP package based on the mechanize module written by John J. Lee. Since mechanize implements the HTTP request/response protocol and parses the resulting HTML, we can categorize twill as a "Web protocol driver" tool (for more details on such taxonomies, see a previous post of mine).

Twill can be used as a domain specific language via a command shell (twill-sh), or it can be used as a normal Python module, from within your Python code. I will show both usage models.

After downloading twill and installing it via the usual "python setup.py install" method, you can start its command line interpreter via the twill-sh script installed in /usr/local/bin. At the interpreter prompt, you can then issue commands such as:
  • go <url> -- visit the given URL.
  • code <http_code> -- assert that the last page loaded had this HTTP status, e.g. code 200 asserts that the page loaded fine.
  • find <regexp> -- assert that the page contains this regular expression.
  • showforms -- show all of the forms on the page.
  • formvalue <formnum> <fieldname> <value> -- set the given field in the given form to the given value. For read-only form widgets/controls, the click may be recorded for use by submit, but the value is not changed.
  • submit [<n>] -- click the n'th submit button, if given; otherwise submit via the last submission button clicked; if nothing was clicked, use the first submit button on the form.
Let's see a quick example of the twill shell in action. As I mentioned before, I wanted to test a freshly-installed instance of Bugzilla, namely I wanted to verify that I can add new bugs and then retrieve them via their bug number. Here is a shell session fragment that opens the Bugzilla main page via the go command and clicks on the "Enter a new bug report" link via the follow command:

[ggheo@concord twill-latest]$ twill-sh

-= Welcome to twill! =-

current page: *empty page*
>> go http://example.com/bugs/
==> at http://example.com/bugs/
current page: http://example.com/bugs/
>> follow "Enter a new bug report"
==> at http://example.com/bugs/enter_bug.cgi
current page: http://example.com/bugs/enter_bug.cgi

At this point, we can issue the showforms command to see what forms are available on the current page.

>> showforms
Form #1
## __Name______    __Type___  __ID________  __Value__________________
   Bugzilla ...    text       (None)
   Bugzilla ...    password   (None)
   product         hidden     (None)        TestProduct
1  GoAheadA ...    submit     (None)        Login

Form #2
## __Name______    __Type___  __ID________  __Value__________________
   a               hidden     (None)        reqpw
   loginname       text       (None)
1                  submit     (None)        Submit Request

Form #3
## __Name______    __Type___  __ID________  __Value__________________
   id              text       (None)
1                  submit     (None)        Find

current page: http://example.com/bugs/enter_bug.cgi

It looks like we're on the login page. We can then use the formvalue (or fv for short) command to fill in the required fields (user name and password), then the submit command to complete the login process. The submit command takes an optional argument -- the number of the submit button you want to click. With no arguments, it activates the first submit button it finds.

>> fv 1 Bugzilla_login grig@example.com
current page: http://example.com/bugs/enter_bug.cgi
>> fv 1 Bugzilla_password mypassword
current page: http://example.com/bugs/enter_bug.cgi
>> submit 1
current page: http://example.com/bugs/enter_bug.cgi

At this point, we can verify that we received the expected HTTP status code (200 if everything went fine) via the code command:

>> code 200
current page: http://example.com/bugs/enter_bug.cgi

We run showforms again to see what forms and fields are available on the current page, then we use fv to fill in a bunch of fields for the new bug we want to enter, and finally we submit the form (note how nicely twill displays the available fields, as well as the first few selections available in drop-down combo boxes):

>> showforms
Form #1
## __Name______    __Type___  __ID________  __Value__________________
   product         hidden     (None)        TestProduct
   version         select     (None)        ['other'] of ['other']
   component       select     (None)        ['TestComponent'] of ['TestComponent']
   rep_platform    select     (None)        ['Other'] of ['All', 'DEC', 'HP', 'M ...
   op_sys          select     (None)        ['other'] of ['All', 'Windows 3.1', ...
   priority        select     (None)        ['P2'] of ['P1', 'P2', 'P3', 'P4', 'P5']
   bug_severity    select     (None)        ['normal'] of ['blocker', 'critical' ...
   bug_status      hidden     (None)        NEW
   assigned_to     text       (None)
   cc              text       (None)
   bug_file_loc    text       (None)        http://
   short_desc      text       (None)
   comment         textarea   (None)
1                  submit     (None)        Commit
2  maketemplate    submit     (None)        Remember values as bookmarkable template

Form #2
## __Name______    __Type___  __ID________  __Value__________________
   id              text       (None)
1                  submit     (None)        Find

current page: http://example.com/bugs/enter_bug.cgi
>> fv 1 op_sys "Linux"
current page: http://example.com/bugs/enter_bug.cgi
>> fv 1 priority P1
current page: http://example.com/bugs/enter_bug.cgi
>> fv 1 assigned_to grig@example.com
current page: http://example.com/bugs/enter_bug.cgi
>> fv 1 short_desc "twill-generated bug"
current page: http://example.com/bugs/enter_bug.cgi
>> fv 1 comment "This is a new bug opened automatically via twill"
current page: http://example.com/bugs/enter_bug.cgi
>> submit
Note: submit is using submit button: name="None", value=" Commit "
current page: http://example.com/bugs/post_bug.cgi

Now we can verify that the bug with the specified description was posted. We use the find command, which takes a regular expression as an argument:

>> find "Bug \d+ Submitted"
current page: http://example.com/bugs/post_bug.cgi
>> find "twill-generated bug"
current page: http://example.com/bugs/post_bug.cgi

No errors were reported, which means the validations succeeded. At this point, we can also inspect the current page via the show_html command in order to see the bug number that Bugzilla automatically assigned. I won't actually show all the HTML; suffice it to say that the bug was assigned number 2. We can then go directly to the page for bug #2 and verify that the various bug elements we entered were indeed posted correctly:

>> go "http://example.com/bugs/show_bug.cgi?id=2"
==> at http://example.com/bugs/show_bug.cgi?id=2
current page: http://example.com/bugs/show_bug.cgi?id=2
>> find "Linux"
current page: http://example.com/bugs/show_bug.cgi?id=2
>> find "P1"
current page: http://example.com/bugs/show_bug.cgi?id=2
>> find "grig@example.com"
current page: http://example.com/bugs/show_bug.cgi?id=2
>> find "twill-generated bug"
current page: http://example.com/bugs/show_bug.cgi?id=2
>> find "This is a new bug opened automatically via twill"
current page: http://example.com/bugs/show_bug.cgi?id=2

I mentioned that all the commands available in the interactive twill-sh command interpreter are also available as top-level functions to be used inside your Python code. All you need to do is import the necessary functions from the twill.commands module.

Here's how a Python script that tests functionality similar to what I described above would look:

#!/usr/bin/env python

from twill.commands import go, follow, showforms, fv, submit, find, code, save_html
import os, time, re

def get_bug_number(html_file):
    """Scrape the bug number assigned by Bugzilla out of the saved
    'Bug posted' page; return "-1" if it cannot be found."""
    bug_number = "-1"
    h = open(html_file)
    for line in h:
        s = re.search(r"Bug (\d+) Submitted", line)
        if s:
            bug_number = s.group(1)
            break
    h.close()
    return bug_number

# MAIN
crt_time = time.strftime("%Y%m%d%H%M%S", time.localtime())
temp_html = "temp.html"

# Open a new bug report
go("http://www.example.com/bugs")
follow("Enter a new bug report")

# Log in
fv("1", "Bugzilla_login", "grig@example.com")
fv("1", "Bugzilla_password", "mypassword")
submit()
code("200")

# Enter bug info
fv("1", "op_sys", "Linux")
fv("1", "priority", "P1")
fv("1", "assigned_to", "grig@example.com")
fv("1", "short_desc", "twill-generated bug at " + crt_time)
fv("1", "comment", "This is a new bug opened automatically via twill at " + crt_time)
submit()
code("200")

# Verify bug info
find(r"Bug \d+ Submitted")
find("twill-generated bug at " + crt_time)

# Get bug number
save_html(temp_html)
bug_number = get_bug_number(temp_html)
os.unlink(temp_html)

assert bug_number != "-1"

# Go to the bug page and verify more detailed info
go("http://example.com/bugs/show_bug.cgi?id=" + bug_number)
code("200")
find("P1")
find("Linux")
find("grig@example.com")
find("This is a new bug opened automatically via twill at " + crt_time)

I added some extra functionality to the Python script -- such as adding the current time to the bug description, so that each time the test script is run, a different bug description is inserted into the Bugzilla database (the current time doesn't of course guarantee uniqueness, but it will do for now :-) I also used the save_html function to save the "Bug posted" page to a temporary file, so that I could retrieve the bug number and query the individual bug page.
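If uniqueness ever becomes a real concern, one cheap improvement (my suggestion, not part of the script above) would be to mix the process ID into the tag:

from twill.commands import fv
import os, time

# timestamp plus PID: still not bulletproof, but a collision now
# requires two runs in the same second from the same process ID
unique_tag = "%s-%d" % (time.strftime("%Y%m%d%H%M%S", time.localtime()),
                        os.getpid())
fv("1", "short_desc", "twill-generated bug at " + unique_tag)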

Conclusion

Twill is an excellent tool for testing Web applications. It can also be used to automate form handling, especially for Web sites that require a login. I especially like the fact that everything can be run from the command line -- both the twill shell and Python scripts based on twill. This means that deploying twill is a snap, and there are no cumbersome GUIs to worry about. The assertion commands built into twill (code, find and notfind) should be enough for testing Web sites that use straight HTML and forms. For more complicated, JavaScript-intensive Web sites, a tool such as Selenium might be more appropriate.
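Speaking of the assertion commands: notfind is simply the mirror image of find, asserting that a regular expression does NOT appear on the current page. A hypothetical one-liner in the twill shell (the URL is made up):

>> go http://example.com/bugs/
>> notfind "Internal Server Error"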

I haven't looked into twill's cookie-handling capabilities, but they're available, according to the README. Some more aspects of twill that I haven't experimented with yet:
  • Script recording: Titus has written a maxq add-on that can be used to automatically record twill-based scripts while browsing the Web site under test; for more details on maxq, see also a previous post of mine
  • Extending twill: you can easily add commands to the twill interpreter (see the sketch below)
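On that extension point: as far as I can tell from the README, extension commands live in an ordinary Python module whose top-level functions become new twill-sh commands, loaded via extend_with. A hypothetical sketch (the module and command names are mine):

# mycommands.py -- hypothetical twill extension module; each top-level
# function should become a twill-sh command once the module is loaded
# (check the twill docs for the exact mechanism)

def bugzilla_login(username, password):
    "Fill in and submit the first form as a Bugzilla login form."
    from twill.commands import fv, submit
    fv("1", "Bugzilla_login", username)
    fv("1", "Bugzilla_password", password)
    submit("1")

You would then load this in the twill shell with something like extend_with mycommands.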
Kudos to Titus for writing a powerful, yet easy to use testing tool.

Friday, September 16, 2005

CherryPy, Cheetah and PEAK on IBM developerWorks

I haven't read these articles yet, but I wanted to have the links in one place for future reference.

Monday, September 12, 2005

Jakob Nielsen on Usability Testing

Do you spend one day per week on observing how new users interact with your product? In fact, do you have any usability testing at all in your budget? In the rare event that you do run usability testing sessions, do you focus on actual user behavior (and not waste time by having users fill in endless questionnaires)? If you tend to answer "no" to these questions, read Jakob Nielsen's article on how to properly conduct usability testing sessions.

Running a Python script as a Windows service

This is a message I posted to comp.lang.python regarding ways to run a regular Python script as a Windows service.

I will assume you want to turn a script called myscript.py into a service.

1. Install the Win2K Resource Kit (or just copy the two binaries instsrv.exe and srvany.exe).

2. Run instsrv to install srvany.exe as a service with the name myscript:
"C:\Program Files\Resource Kit\instsrv" myscript "C:\Program Files\Resource Kit\srvany.exe"

3. Go to Computer Management->Services and make sure myscript is listed as a service. Also make sure the Startup Type is Automatic.

4. Create a myscript.bat file with the following contents in e.g. C:\pyscripts:

C:\Python23\python C:\pyscripts\myscript.py

(replace Python23 with your Python version)

5. Create new registry entries for the new service.
  • run regedt32 and go to the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\myscript entry
  • add new key (Edit->Add Key) called Parameters
  • add new entry for Parameters key (Edit->Add Value) to set the Application name
    • Name should be Application
    • Type should be REG_SZ
    • Value should be path to myscript.bat, i.e. C:\pyscripts\myscript.bat
  • add new entry for Parameters key (Edit->Add Value) to set the working directory
    • Name should be AppDir
    • Type should be REG_SZ
    • Value should be path to pyscripts directory, i.e. C:\pyscripts
6. Test starting and stopping the myscript service in Computer Management->Services.
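For reference, here's the kind of myscript.py this setup expects -- a long-running loop, since srvany simply keeps the process alive. This is a made-up example, not part of the original recipe:

# myscript.py -- made-up example of a long-running script suitable for
# running under srvany: append a timestamp to a log file once a minute
import time

LOGFILE = r"C:\pyscripts\myscript.log"

while True:
    f = open(LOGFILE, "a")
    f.write(time.strftime("%Y-%m-%d %H:%M:%S") + " still alive\n")
    f.close()
    time.sleep(60)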

Michael Feathers on Unit Testing Rules

Short but high-impact post from Michael Feathers (of "Working Effectively with Legacy Code" fame). His main recommendation is to have unit tests that do not interact with the OS or other applications. Interactions to avoid include databases, sockets, even file systems. When you have a set of unit tests that run in isolation (and thus run very quickly), and when you have other sets of tests that do exercise all the interactions above, you are in a good position to quickly pinpoint who the culprit is when a test fails.
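To illustrate with a toy Python example of my own (not from Feathers' post): have the unit under test accept a file-like object instead of opening files itself, so the unit test runs entirely in memory, while a separate integration test can exercise the real file system:

import unittest
from StringIO import StringIO  # io.StringIO on Python 3

def count_error_lines(stream):
    "Count lines containing 'ERROR' in any file-like object."
    return len([line for line in stream if "ERROR" in line])

class CountErrorLinesTest(unittest.TestCase):
    # no file system, no sockets, no databases: runs in microseconds
    def test_counts_only_error_lines(self):
        log = StringIO("INFO ok\nERROR boom\nERROR again\n")
        self.assertEqual(count_error_lines(log), 2)

if __name__ == "__main__":
    unittest.main()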

Friday, September 02, 2005

Recommended site: QA Podcast

I got an email from Darren Barefoot pointing me to a site he helped put together: QA Podcast. Very interesting stuff: interviews/conversations about software testing with folks who care and have something to say on this subject. I was glad to see that the podcasts published so far cover subjects such as performance testing and exploratory testing. So far I've listened to a conversation on exploratory testing with James Bach, and I already took away a ton of ideas I can apply in my testing activities.
