Friday, December 15, 2006

Mock testing examples and resources

Mock testing is a very controversial topic in the area of unit testing. Some people swear by it, others swear at it. As always, the truth is somewhere in the middle. But first of all, let's ask Wikipedia about mock objects. Here's what it says:

"Mock objects are simulated objects that mimic the behavior of real objects in controlled ways. A computer programmer typically creates a mock object to test the behavior of some other object, in much the same way that an automobile designer uses a crash test dummy to test the behavior of an automobile during an accident."

This is interesting, because it talks about accidents, which in software development speak would be errors and exceptions. And indeed, I think one of the main uses of mock objects is to simulate errors and exceptions that would otherwise be very hard to reproduce.

Let's get some terminology clarified: when people say they use mock objects in their testing, in most cases they actually mean stubs, not mocks. The difference is expanded upon with his usual brilliance by Martin Fowler in his article "Mocks aren't stubs". I'll let you read that article and draw your own conclusions. Here are some of mine: stubs are used to return canned data to your methods or functions under test, so that you can make some assertions on how your program reacts to that data (here, I use "program" as shorthand for "method or function under test", not for executable or binary.) Mocks, on the other hand, are used to specify certain expectations about how the methods of the mocked object are called by your program: how many times, with how many arguments, etc.
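
To make the distinction concrete, here is a minimal hand-rolled sketch (no mock library involved, and the class and function names are made up purely for illustration):

class StubTimeService:
    """A stub: returns canned data; makes no claims about how it is called."""
    def current_time(self):
        return "2006-12-15 12:00:00"

class MockTimeService:
    """A (very simple) mock: records calls so the test can assert on the interaction."""
    def __init__(self):
        self.call_count = 0
    def current_time(self):
        self.call_count += 1
        return "2006-12-15 12:00:00"

def timestamped_greeting(time_service):
    # the "program" under test: a function consuming the time service
    return "Hello, it is now %s" % time_service.current_time()

# Stub-style test: assert on the program's output, given canned data.
assert timestamped_greeting(StubTimeService()) == "Hello, it is now 2006-12-15 12:00:00"

# Mock-style test: assert on how the collaborator was used.
mock = MockTimeService()
timestamped_greeting(mock)
assert mock.call_count == 1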

In my experience, stubs are more useful than mocks when it comes to unit testing. You should still use a mock library or framework even when you want to use stubs, because these libraries make it very easy to instantiate and work with stubs -- as we'll see in some of the examples I'll present.

I said that mock testing is a controversial topic. If you care to follow the exchange of comments I had with Bruce Leggett on this topic, you'll see that his objections to mocking are very valid. His main point is that if you mock an object and the interface or behavior of that object changes, your unit tests which use the mock will pass happily, when in fact your application will fail.

I thought some more about Bruce's objections, and I think I can come up with a better rule of thumb now than I could when I replied to him. Here it is: use mocking at the I/O boundaries of your application and mock the interactions of your application with external resources that are not always under your control.

When I say "I/O boundaries", I mean mostly databases and network resources such as Web servers, XML-RPC servers, etc. The data that these resources produce is consumed by your application, and it often contains some randomness that makes it very hard for your unit tests to assert things about it. In this case, you can use a stub instead of the real external resource and you can return canned data from the stub. This gives you some control over the data that is consumed by your program and allows you to make more meaningful assertions about how your program reacts to that data.

These external resources are also often unreachable due to various error conditions which again are not always under your control, and which are usually hard to reproduce. In this case, you can mock the external resource and simulate any errors or exceptions you want, and see how your program reacts to them in your unit tests. This relates to the "crash test dummy" concept from the Wikipedia article.

In most cases, the external resources that your application needs are accessed via stable 3rd party libraries or APIs whose interfaces change rarely. For example, in Python you can use standard library modules such as urllib or xmlrpclib to interact with Web servers or XML-RPC servers, or 3rd party modules such as cx_Oracle or MySQLdb to interact with various databases. These modules, whether part of the Python stdlib or 3rd party, have well defined interfaces that rarely if ever change. So you have a fairly high degree of confidence that their behavior won't change under you at short notice, and this makes them good candidates for mocking.

I agree with Bruce that you shouldn't go overboard with mocking objects that you create in your own application. There's a good chance the behavior/interface of those objects will change, and you'll have the situation where the unit tests which use mock versions of these objects will pass, when in fact the application as a whole will fail. This is also a good example of why unit tests are not sufficient; you need to exercise your application as a whole via functional/integration/system testing (here's a good concrete example why). In fact, even the most enthusiastic proponents of mock testing do not fail to mention the need for testing at higher levels than unit testing.

Enough theory, let's see some examples. All of them use Dave Kirby's python-mock module. There are many other mock libraries and modules for Python, with the newest addition being Ian Bicking's minimock module, which you should definitely check out if you use doctest in your unit tests.

The first example is courtesy of Michał, who recently added some mock testing to the Cheesecake unit tests. This is how cheesecake_index.py uses urllib.urlretrieve to retrieve a package in order to investigate it:

try:
    downloaded_filename, headers = urlretrieve(self.url, self.sandbox_pkg_file)
except IOError, e:
    self.log.error("Error downloading package %s from URL %s" % (self.package, self.url))
    self.raise_exception(str(e))
if headers.gettype() in ["text/html"]:
    f = open(downloaded_filename)
    if re.search("404 Not Found", "".join(f.readlines())):
        f.close()
        self.raise_exception("Got '404 Not Found' error while trying to download package ... exiting")
    f.close()

To test this functionality, we used to have a unit test that actually grabbed a tar.gz file from a Web server. This was obviously sub-optimal, because it required the Web server to be up and running, and it couldn't reproduce certain errors/exceptions to see if we handle them correctly in our code. Michał wrote a mocked version of urlretrieve:

def mocked_urlretrieve(url, filename):
    if url in VALID_URLS:
        shutil.copy(os.path.join(DATA_PATH, "nose-0.8.3.tar.gz"), filename)
        headers = Mock({'gettype': 'application/x-gzip'})
    elif url == 'connection_refused':
        raise IOError("[Errno socket error] (111, 'Connection refused')")
    else:
        response_content = '''
HTML_INCLUDING_404_NOT_FOUND_ERROR
'''
        dump_str_to_file(response_content, filename)
        headers = Mock({'gettype': 'text/html'})

    return filename, headers
(see the _helper_cheesecake.py module for the exact HTML string returned, since Blogger refuses to include it because of its tags)

The Mock class from python-mock is used here to instantiate and mock the headers object returned by urlretrieve. When you do:
headers = Mock({'gettype': 'text/html'})
you get an object which has all its methods stubbed out and returning None, with the exception of the one method you specified, gettype, which in this case will return the string 'text/html'.

This is the big advantage of using a library such as python-mock: you don't have to manually stub out all the methods of the object you want to mock; instead, you simply instantiate that object via the Mock class, and let the library handle everything for you. If you don't specify anything in the Mock constructor, all the methods of the mocked object will return None. In our case, since cheesecake_index.py only calls header.gettype(), we were only interested in this method, so we specified it in the dictionary passed to the Mock class, along with its return value.
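
For instance, here is roughly how the default behavior plays out (a small sketch assuming Dave Kirby's mock module is importable as "mock", as in the examples here):

from mock import Mock

# Only gettype() has a canned return value; every other method is stubbed out
# and simply returns None when called.
headers = Mock({'gettype': 'text/html'})

print headers.gettype()      # 'text/html'
print headers.getencoding()  # None -- not specified in the constructor dictionary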

The mocked_urlretrieve function inspects its first argument, url, and, based on its value, either copies a tar.gz file into a target location (indicated by filename) for further inspection, or raises an IOError exception, or returns an HTML document with a '404 Not Found' error. This illustrates the usefulness of mocking: it avoids going to an external resource (a Web server in this case) to retrieve a file, and instead it copies it from the file system to another location on the file system; it simulates an exception that would otherwise be hard to reproduce consistently; and it returns an error which also would be hard to reproduce. Now all that remains is to exercise this mocking functionality in some unit tests, and this is exactly what test_index_url_download.py does, by exercising 3 test cases: valid URL, invalid URL (404 error) and unreachable server. Just to exemplify, here's how the "Connection refused" exception is tested:

try:
    self.cheesecake = Cheesecake(url='connection_refused',
                                 sandbox=default_temp_directory, logfile=logfile)
    assert False, "Should throw a CheesecakeError."
except CheesecakeError, e:
    print str(e)
    msg = "Error: [Errno socket error] (111, 'Connection refused')\n"
    msg += "Detailed info available in log file %s" % logfile
    assert str(e) == msg

You might have a question at this point: how did we make our application aware of the mocked version of urlretrieve? In Java, where the mock object techniques originated, this is usually done via what is called "dependency injection". This simply means that the mocked object is passed to the object under test (OUT) either via the OUT's constructor, or via one of the OUT's setter methods. In Python, this is absolutely unnecessary, because of one honking great idea called namespaces. Here's how Michał did it:
import cheesecake.cheesecake_index as cheesecake_index
from _helper_cheesecake import mocked_urlretrieve
cheesecake_index.urlretrieve = mocked_urlretrieve
What happens here is that the urlretrieve name used inside the cheesecake_index module is simply reassigned and pointed to the mocked_urlretrieve function. Very simple and elegant. This way, the OUT, in our case the cheesecake_index module, is completely unchanged and blissfully unaware of any mocked version of urlretrieve. It is only in the unit tests that we reassign urlretrieve to its mocked version. Further proof, if you needed one, of Python's vast superiority over Java :-)
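
One caveat worth keeping in mind: the reassignment is global to the cheesecake_index module, so it's good hygiene to restore the original name when the test is done, or other tests importing the same module may unknowingly keep using the mock. Here is a minimal sketch of how that might look in a unittest-based test (an illustration only, not necessarily how test_index_url_download.py does it):

import unittest

import cheesecake.cheesecake_index as cheesecake_index
from _helper_cheesecake import mocked_urlretrieve

class UrlDownloadTest(unittest.TestCase):
    def setUp(self):
        # swap in the mock...
        self.original_urlretrieve = cheesecake_index.urlretrieve
        cheesecake_index.urlretrieve = mocked_urlretrieve

    def tearDown(self):
        # ...and always put the real urlretrieve back
        cheesecake_index.urlretrieve = self.original_urlretrieve

    # ...test methods that exercise the download behavior go here...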

The second example is courtesy of Karen Mishler from ARINC. She used the python-mock module to mock an interaction with an external XML-RPC server that produces avionics data. In this case, the module that gets mocked is xmlrpclib (I changed around some names of servers and methods and I got rid of some information which is not important for this example):

fakeResults = {
    "Request": ('|returncode|0|/returncode|',
                '|machineid|fakeServer:81:4080|/machineid|'),
    "Results": ('|returncode|0|/returncode|',
                '|origin|ABC|/origin|\n|destination|DEF|/destination|\n'),
}
mockServer = Mock(fakeResults)
xmlrpclib = Mock({"Server": mockServer})

(I replaced the XML tag brackets with | because Blogger had issues with the tags....Beta software indeed)

Karen mocked the Server object used by xmlrpclib to return a handle to the XML-RPC server. When the application calls xmlrpclib.Server, it will get back the mockServer object. When the application then calls the Request or Results methods on this object, it will get back the canned data specified in the fakeResults dictionary. This completely avoids the network traffic to and from the real XML-RPC server, and allows the application to consume specific data about which the unit tests can make more meaningful assertions.
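
To see the whole flow in one self-contained sketch (the fetch_results function below is a hypothetical stand-in for Karen's application code, not her actual API):

from mock import Mock
import xmlrpclib

fakeResults = {
    "Request": ('|returncode|0|/returncode|',
                '|machineid|fakeServer:81:4080|/machineid|'),
    "Results": ('|returncode|0|/returncode|',
                '|origin|ABC|/origin|\n|destination|DEF|/destination|\n'),
}

def fetch_results(server_url):
    # stand-in for the application code: gets a server handle and asks for results
    server = xmlrpclib.Server(server_url)
    return server.Results()

# In the unit test, replace the xmlrpclib name in this namespace with a mock,
# the same trick used in the Cheesecake example above.
mockServer = Mock(fakeResults)
xmlrpclib = Mock({"Server": mockServer})

returncode, payload = fetch_results("http://fakeServer:81")
assert '|origin|ABC|/origin|' in payload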

The third example doesn't use mocking per se, but instead illustrates a pattern sometimes called "Fake Object"; that is, replacing an object that your application depends on with a more lightweight and faster version to be used during testing. A good example is using an in-memory database instead of a file system-based database. This is usually done to speed up the unit tests and thus have more frequent continuous integration runs.

The MailOnnaStick application that Titus and I presented at our PyCon06 tutorial uses Durus as the back-end for storing mail message indexes. In the normal functionality of the application, we store the data on the file system using the FileStorage functionality in Durus (see the db.py module). However, Durus also provides MemoryStorage, which we decided to use for our unit tests via the mockdb.py module. In this case, mockdb is actually a misnomer, since we're not actually mocking or stubbing out methods of the FileStorage version, but instead we're reimplementing that functionality using the faster MemoryStorage. You can see how we use mockdb in our unit tests by looking at the test_index.py unit test module. Python namespaces come to the rescue again, since we don't have to make index.py, the consumer of the database functionality, aware of any mocking-related changes, except inside the unit test. In the test_index.py unit test, we reassign the index.db name to mockdb:
from mos import index, mockdb
index.db = mockdb
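
If your storage back-end doesn't offer a ready-made in-memory implementation the way Durus does, the same "Fake Object" idea is easy to hand-roll; here's a minimal, hypothetical sketch (the class and method names are made up, not the MailOnnaStick or Durus API):

class InMemoryStore:
    """Fake object: same save/load interface as a file-backed store,
    but everything lives in a plain dictionary, so tests run fast."""

    def __init__(self):
        self._data = {}

    def save(self, key, value):
        self._data[key] = value

    def load(self, key):
        return self._data.get(key)

# In a unit test, the consumer would get the fake via the same namespace
# reassignment trick, e.g.: index.db = InMemoryStore()
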
Speaking of patterns, I found very thorough explanations of unit testing patterns at the xUnit Patterns Web site. Sometimes the explanations are too thorough, if I may say so -- too much hair splitting going on -- but overall it's a good resource if you're interested in the more subtle nuances of Test Stubs, Test Doubles, Mock Objects, Test Spies, etc.

Mock testing is being used pretty heavily in Behavior-Driven Development (BDD), which I keep hearing about lately. I haven't looked too much into BDD so far, but from the little I've read about it, it seems to me that it's "just" syntactic sugar on top of the normal Test-Driven Development process. BDD does emphasize good naming for the unit tests, which, if done to the letter, turns the list of unit tests into a specification for the behavior of the application under test (hence the B in BDD). I think this can be achieved by properly naming your unit tests, without necessarily resorting to tools such as RSpec. But I may be wrong, and maybe BDD is a pretty radical departure from TDD -- I don't know yet. It's worth checking out in any case.

I'll finish by listing some Web sites and articles related to mock testing. Enjoy!

Mind maps and testing

Jonathan Kohl, whose blog posts are always very insightful, writes about using mind maps to visualize software testing mnemonics (FCC CUTS VIDS; each letter represents an area of functionality within a product where testing efforts can be applied.) He finds that a mind map goes beyond the linearity of a list of mnemonics and gives testers a home base from which they can venture out into the product and explore/test new areas. Jonathan's findings match my experiences in using mind maps.

Thursday, December 14, 2006

"The Problem with JUnit" article

Simon Peter Chappell posted a blog entry on "The Problem with JUnit". The title is a bit misleading, since Simon doesn't really have a problem with JUnit per se. His concern is that this tool/framework is so ubiquitous in the Java world, that people new to unit testing think that by simply using it, they're done, they're "agile", they're practicing TDD.

Simon's point is that JUnit is just a tool, and as such it cannot magically make you write good unit tests. This matches my experience: writing unit tests is hard. It's less important what tool or framework you use; what matters is that you cover as many scenarios as possible in your unit tests. What's more, unit tests are definitely necessary, but also definitely not sufficient for a sound testing strategy. You also need comprehensive automated functional and integration tests, and even (gasp) GUI tests. Just keep in mind Jason Huggins's FDA-approved testing pyramid.

Simon talks about how JUnit beginners are comfortable with "happy path" scenarios, but are often clueless about testing exceptions and other "sad path" conditions. This might partly be due to the different mindset that developers and testers have. When you write tests, you need to put your tester hat on and try breaking your software, as well as making sure it does what it's supposed to do.

In the Python testing world, we are fortunate to have a multitude of unit test tools, from the standard library unittest and doctest to tools and frameworks such as py.test, nose, Testoob, testosterone, and many others (see the Unit Testing Tools section of the PTTT for more details). There is no one tool that rules them all, as JUnit does in the Java world, and I think this is a good thing, since it allows people to look at different ways to write their unit tests, each with their own strengths and weaknesses. But tools are not enough, as Simon points out, and what we need are more articles/tutorials/howtos on techniques and strategies for writing good tests, be they unit, functional, etc. I'm personally looking forward to reading Roy Osherove's book "The Art of Unit Testing" when it's ready. You may also be interested in some of my articles on testing and other topics. And the MailOnnaStick tutorial wiki might give you some ideas too.

Switched to Blogger Beta

I apologize if your RSS feed reader is suddenly swamped with posts from my blog. It's hopefully a one-time thing due to my having switched my blog to Blogger Beta.

Wednesday, December 13, 2006

Hungry for cheesecake?

If you are, search for "cheesecake" using Google Code Search. If you do, you'll get a unit test from the Cheesecake project as the very first result. Clearly, Google have their act together! :-)

Tuesday, December 05, 2006

"Scrum and XP From the Trenches" report

This just in via the InfoQ blog: a report (PDF) written by Henrik Kniberg with the intriguing title "Scrum and XP From the Trenches". Haven't read all of it yet, but the quote from the report included at the end of the InfoQ blog post caught my attention:

"I've probably given you the impression that we have testers in all Scrum teams, that we have a huge acceptance test team for each product, that we release after each sprint, etc., etc. Well, we don't. We've sometimes managed to do this stuff, and we've seen that it works when we do. But we are still far from an acceptable quality assurance process, and we still have a lot to learn there."

Testing is hard. But testing can also be fun!

Friday, December 01, 2006

"Performance Testing with JUnitPerf" article

Andrew Glover, who has been publishing a series of articles related to code quality on IBM developerWorks, talks about "Performance Testing with JUnitPerf". The idea is to decorate your unit tests with timing constraints, so that they also become performance tests. If you want to do the same in Python, I happen to know about pyUnitPerf, the Python port of JUnitPerf. Here is a blog post/tutorial I wrote a while ago on pyUnitPerf.
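
To give a flavor of the idea in Python (this is just a rough illustration of wrapping a test with a timing constraint; it is not the actual JUnitPerf or pyUnitPerf API):

import time
import unittest

class TimedTest(unittest.TestCase):
    """Decorates another TestCase and fails if it takes longer than max_elapsed seconds."""

    def __init__(self, test, max_elapsed):
        unittest.TestCase.__init__(self, 'run_timed')
        self.test = test
        self.max_elapsed = max_elapsed

    def run_timed(self):
        start = time.time()
        self.test.debug()    # run the wrapped test directly; raises on failure
        elapsed = time.time() - start
        self.failUnless(elapsed <= self.max_elapsed,
                        "test took %.2fs, limit is %.2fs" % (elapsed, self.max_elapsed))

# Usage sketch: suite.addTest(TimedTest(MyTestCase('test_something'), max_elapsed=1.0))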

PyCon news

I was very glad to see that the 3 proposals I submitted to PyCon07 were accepted: a "Testing Tools in Python" tutorial presented jointly with Titus, a "Testing Tools Panel" that I will moderate, and a talk on the Pybots project. The complete list of accepted talks and panels is here.

Here are the brief description and the outline for the Testing Tools tutorial that Titus and I will present. We will actually cover much more than just testing tools -- we'll talk about test and development techniques and strategies. It should be as good as or better than the one we gave last year, which attracted a lot of people.

The Testing Tools Panel has a Wiki page. If you're interested in attending, please consider adding questions or topics of interest to you. If there is enough interest, I'm thinking about also organizing a BoF session on Testing Tools and Techniques, since the panel's duration will be only 45 minutes.

Finally, my Pybots talk will consist of an overview of the Pybots project: I will talk about the setup of the Pybots buildbot farm, about the issues that the Pybots farm has helped uncover, and also about lessons learned in building, sustaining and growing an open-source community project.

The program for PyCon07 looks very solid, with a lot of interesting talks and tutorials. I'm very much looking forward to the 4 days I'll spend in beautiful Addison, TX :-)

Tuesday, November 21, 2006

Good Unix-related blog

Vladimir Melnikoff brought his blog to my attention: "Nothing but Unix". Good resource for Unix enthusiasts, mostly composed of industry-related news.

Python Fuzz Testing Tools

Ian Bicking suggested I create a new category in the Python Testing Tools Taxonomy: Fuzz Testing or Fuzzing. Done. If you're not familiar with the term, see the Wikipedia article which talks about this type of testing. Here's an excerpt: "The basic idea is to attach the inputs of a program to a source of random data ("fuzz"). If the program fails (for example, by crashing, or by failing built-in code assertions), then there are defects to correct. The great advantage of fuzz testing is that the test design is extremely simple, and free of preconceptions about system behavior."
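
To make the idea concrete, here is a toy fuzzing loop in Python (a sketch of the general technique only, not tied to any of the tools mentioned below; parse_config is a hypothetical function under test):

import random

def random_bytes(max_length=100):
    """Return a random byte string to be used as fuzz input."""
    length = random.randint(0, max_length)
    return ''.join(chr(random.randint(0, 255)) for i in range(length))

def fuzz(target, iterations=1000):
    """Call target() with random input; collect any crashes (unexpected exceptions)."""
    failures = []
    for i in range(iterations):
        data = random_bytes()
        try:
            target(data)
        except Exception, e:
            failures.append((repr(data), repr(e)))
    return failures

# Usage sketch: failures = fuzz(parse_config); assert not failures, failures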

Ian told me about the Peach Fuzzer Framework. I was familiar with Pester (the home page talks about a Java tool called Jester, and it has links to the Python version called Pester); I also googled some more and found other Python fuzzing tools such as antiparser and Taof, which are both geared towards fuzzing network protocols. In fact, many fuzzing tools are used in security testing because they can aid in attacking software via random inputs. See this Hacksafe article on "Fuzz testing tools and techniques" and this PacketStorm list of fuzzing tools. Another good overview is Elliotte Harold's developerWorks article on fuzz testing. Very interesting stuff. If the "Python Testing Tools" tutorial Titus and I proposed for PyCon gets accepted, expect to see some fuzz testing included in our arsenal :-)

I also added Ian's minimock tool to the PTTT page. Very cool minimal approach to mock testing, achieved by embedding mocking constructs in doctests.

In other testing-related blog posts, Titus talks about the difficulty of retrofitting testing to an existing application (even when you wrote the testing tools!), and Max Ischenko presents some uber-cool plugins which integrate nose into vim.

Thursday, November 02, 2006

"Swap space management" article at IBM developerWorks

From IBM developerWorks, a very nice summary of the issues involved in setting up and maintaining swap space on *nix systems: "Swap space management and tricks".

Wednesday, November 01, 2006

Daniel Read on software and Apgar scores

Daniel Read blogs on the topic: "Does software need an Apgar score?". He mentions the fact that a simple metric (the Apgar score for newborns) revolutionized the childbirth process, "through standardization of techniques, training, and regulation of who exactly was allowed to perform certain procedures (based on whether they had the training and experience)". He then talks about how a similar simple score might help the quality of software development, by assessing its "health". Hmmm... all this sounds strangely familiar to me -- Cheesecake anybody? Of course, Daniel accepts that this idea is highly controversial and maybe a bit simplistic. However, I for one am convinced that it would help with improving, if not the quality, then at least the kwalitee of the software packages we see in the wild today.

Thursday, October 26, 2006

Got Edge?

I mean Edgy. I mean Edgy Eft. Get it.

Update

I followed the EdgyUpgrades document from the Ubuntu Wiki and all it took to upgrade my Dell laptop from Dapper to Edgy was one command:

gksu "update-manager -c" 

I call this painless. Everything seems to be working just fine after the upgrade. Haven't had time to play with it at all -- in fact, I don't even know how long the upgrade process took, since I left after I started it.

Proposal for Testing Tools Panel at PyCon07

Following Titus's example with his Web Frameworks Panel proposal for PyCon07, I proposed a Testing Tools Panel. And yes, I expect the author of twill to participate and take questions :-)

I created a TestingToolsPanel page on the PyCon07 wiki. Please feel free to add your own testing-related topics of interest and/or questions for the authors. If you are a testing tool author, please consider participating in the panel. You can leave a comment here or send me an email (grig at gheorghiu.net) and let me know if you're interested in participating.

Here's what I have so far on the Wiki page:

I maintain a "Python Testing Tools Taxonomy" (PTTT) Wiki page.

Here are some of the tools listed on the PTTT page:
  • unit testing tools (unittest/doctest/py.test/nose/Testoob/testosterone)
  • mock/stub testing tools (python-mock/pmock/stubble)
  • Web testing tools (twill/webunit/zope.testbrowser/Pamie/paste.test.fixture)
  • acceptance testing tools (PyFit/texttest/FitLoader)
  • GUI testing tools (pywinauto/WATSUP/winGuiAuto/guitest)
  • source code checking tools (pylint/pychecker/pyflakes)
  • code coverage tools (coverage/figleaf/Pester)
  • other miscellaneous testing tools (pysizer/pymetrics/svnmock/testtools)

I propose to have a panel where authors of some of these tools would discuss and take questions on topics such as:
  • what need prompted the creation of the testing tool
  • what type of testing does the tool belong to (unit, functional, acceptance, system, performance)
  • what specific features does the tool offer that other tools lack
  • what are the most common testing scenarios you have seen in your user base
  • are there any platform- or OS-specific gotchas related to the tool
  • how extensible is the tool (plugins etc.)
  • how easy to learn is the tool
  • how well tested is the tool
  • how well documented is the tool

Thursday, October 19, 2006

"Agile in action" photostream

From a blog I read with pleasure, Simon Baker's "Agile in action", here's a link to a Flickr photostream that shows in my opinion what agile is all about: collaboration, camaraderie, storytelling....in short, having great fun and producing great software in the process. Stevey, I can tell you've never been part of an agile team in your life -- otherwise why would you be so bitter and cranky about it?

Monday, October 09, 2006

The 90-9-1 rule and building an open source community

Jakob Nielsen talks about the 90-9-1 rule in his latest Alertbox newsletter: "Participation inequality: encouraging more users to contribute". Simply put, the rule states that in a typical online community, 90% of the users are lurkers, 9% are occasional contributors, and only 1% are active contributors. This should be interesting for people trying to build and grow open source projects. Nielsen has some suggestions to offer on how to overcome this "participation inequality". Read the article for his suggestions.

Here are some of my own observations and lessons learned from various open source efforts I've been part of (many of them are things I've tried to do on the Pybots project):

How to build an open source community

* Blog, blog, blog
* Send messages to mailing lists related to the area of your project
* Write extensive documentation, make it easy for people to join
* Create a project repository (Google Code)
* Get help from early adopters, involve them in the project

How to sustain and grow an open source community

* Blog, blog, blog
* Send messages to individuals who might be interested in contributing
* Acknowledge contributions
* Respond quickly to issues on mailing list
* Demonstrate usefulness of the project, both to contributors, and to any organizations involved (e.g. the PSF)
* Market/promote/evangelize the project tirelessly
* Recommended reading: "Fearless change: Patterns for introducing new ideas" by Mary Lynn Manns and Linda Rising

Comments about your own experience in building an open source community are much appreciated.

Thursday, October 05, 2006

Let's celebrate Roundup by turning it into the official Python bug tracker

Richard Jones just posted a note about Roundup turning 5. What better birthday gift than turning it into the official Python bug/issue tracker. Readers of Planet Python certainly know by now that there are 2 issue trackers in contention: JIRA and Roundup. Unless people step up to volunteer as admins for maintaining a Roundup-based Python issue tracker, the PSF will choose JIRA. I walked the walk and volunteered myself. I know there must be other people out there who would like to see a Python-based project be selected over a Java project. Any takers? Send an email by Oct. 16th to infrastructure at python.org stating your intention to volunteer. All it takes is 8 to 10 people.

Pybots news

I'm happy to report that the Pybots project continues to gain momentum. In raw numbers, we have 8 buildslaves running the automated tests for 17 projects, and also testing the installation of 18 other packages. Pretty impressive, if I may say so myself. This table is copied from the main pybots.org page and shows the current setup:

Builder name        | Project(s) tested                        | Pre-requisites installed                        | Owner
x86 Red Hat 9       | Twisted                                  | setuptools, zope.interface, pycrypto, pyOpenSSL | Grig Gheorghiu
x86 Debian Unstable | docutils, roundup                        | N/A                                             | Seo Sanghyeon
x86 Ubuntu Dapper   | parsedatetime                            | setuptools                                      | Mike Taylor (Bear)
x86 OSX             | vobject, zanshin                         | setuptools, zope.interface, Twisted             | Mike Taylor (Bear)
x86 Gentoo          | pysqlite, Genshi, Trac, feedvalidator    | clearsilver, pysqlite                           | Manuzhai
amd64 Ubuntu Dapper | MySQLdb, CherryPy                        | N/A                                             | Elliot Murphy
x86 Windows 2003    | lxml (dev, stable), Bazaar (dev, stable) | libxml2, libxslt, zlib, iconv                   | Sidnei da Silva
x86 Ubuntu Breezy   | Cheesecake                               | setuptools, nose, logilab-astng, pylint         | Grig Gheorghiu

Some more projects and buildslaves are in the pipeline, so I hope to be able to announce them soon. I'd like to thank all the contributors so far, in chronological order of their contributions: Seo Sanghyeon, Mike Taylor (Bear), Manuzhai, Elliot Murphy and Sidnei da Silva.

People interested in this project -- whether they'd like their project to be tested on an existing buildslave, or they'd like to contribute a buildslave -- are encouraged to peruse the documentation on the Pybots page, then send a message to the Pybots mailing list.

Friday, September 22, 2006

Notepad++ rocks

I heard about Notepad++ from Michael Carter, who showcased SQLAlchemy at our last SoCal Piggies meeting and used Notepad++ to edit his Python files. I downloaded it this week and all I can say is that it rocks! For Windows users, it's one of the best editors I've ever seen, and of course it's completely free. Notepad++ is based on the Scintilla editing component, does syntax coloring for a gazillion languages (Python included, of course), and ships with a huge number of plugins, such as a hex editor. Highly recommended!

Tuesday, September 19, 2006

Buildbot used for continuous integration in the Gnome project

José Dapena Paz has a nice write-up on how the Gnome Build Brigade is using buildbot for continuous integration of all the projects under the Gnome umbrella. Buildbot scores again!

Monday, September 18, 2006

Any projects that need Pybots buildslaves?

Know of any projects that need Pybots buildslaves?

This is a question that's been asked twice already on the Pybots mailing list, or in emails addressed directly to me. Every time I answered that I don't really know any, and I advised the people asking the question to go through the list of their favorite Python projects, and pick some that they'd like to test. Or go through the list of major Web frameworks and pick some.

Anyway -- maybe a better way is to ask all the readers of this blog: do you have a project that you'd like to test in the Pybots buildbot farm, but maybe you don't have the hardware to run the buildslave on? Then let me know, or even better, let the Pybots mailing list know. There are people such as Jeff McNiel and Skip Montanaro who have hardware and resources, and are looking for projects to test.

BTW, the Pybots farm now has 6 buildslaves (the 6th was courtesy of Elliot Murphy, who contributed an AMD-64 Ubuntu Dapper box running the MySQLdb tests), with 3 or 4 buildslaves on the way.

Thanks a lot to all the people who have contributed or will contribute soon. It's very rewarding to see the Python community responding in an enthusiastic manner to a call to do better testing of Python projects.

Titus has a new blog

If you've been reading Titus's advogato blog, you'll be interested in knowing that he has a new blog: "Daily Life in an Ivory Basement". Will Guaraldi will be happy to know the blog is based on pyblosxom.

Tuesday, September 12, 2006

Pybots project keeps rolling

Some updates on the Pybots project:
  • Mike Taylor aka Bear from OSAF contributed 2 buildslaves: an Ubuntu Dapper box for testing his parsedatetime package, and an Intel Mac OSX box for testing two libraries used by OSAF -- vobject and zanshin
  • Manuzhai contributed a Gentoo box for testing pysqlite and Trac
  • Elliot Murphy from mysql.com will contribute an AMD-64 Ubuntu Dapper box for testing MySQLdb
  • In summary, we're up to 5 (soon to be 6) buildslaves, with more on the way
  • Seo Sanghyeon created a Google Code project for pybots; he and I are the current admins for this project; you can browse the Subversion repository for various scripts and buildbot config. files
  • As a cool side note, Sanghyeon rewrote the home page for pybots.org in reST; the page is kept in subversion and the server hosting pybots.org is doing a svn update every hour, followed by a rest2html call
As I said in a previous post, the Pybots setup already proved its usefulness by uncovering issues with new keywords such as 'with' and 'as'. Some of the projects affected by these new keywords are zope.interface, roundup and zanshin. According to the Zope developers, the issue had been fixed a while ago in the svn repository, but no release has happened since. The roundup developers already fixed the issue -- nice to see this.

Also, the Trac unit tests, when running against the latest Python trunk, are failing with an ugly backtrace. If somebody can shed some light, please do.

Again, if you're interested in running a Pybots buildslave, take a look at the various pieces of documentation at pybots.org and send a message to the Pybots mailing list.

Friday, September 08, 2006

Pay attention to the new 'with' and 'as' keywords

As of Sept. 6th (revision 51767), Python 2.6 has two new keywords: with and as. Python code that uses either one of these words as a variable name will be in trouble. How do I know that? Because the Twisted unit tests have been failing in the Twisted Pybots buildslave ever since. Actually the issue is not with Twisted code, but with zope.interface, which is one of the Twisted pre-requisites. Here's the offending code:

Traceback (most recent call last):
File "/tmp/Twisted/bin/trial", line 23, in
from twisted.scripts.trial import run
File "/tmp/Twisted/twisted/scripts/trial.py", line 10, in
from twisted.application import app
File "/tmp/Twisted/twisted/application/app.py", line 10, in
from twisted.application import service
File "/tmp/Twisted/twisted/application/service.py", line 20, in
from twisted.python import components
File "/tmp/Twisted/twisted/python/components.py", line 37, in
from zope.interface.adapter import AdapterRegistry
File "/tmp/python-buildbot/local/lib/python2.6/site-packages/zope/interface/adapter.py", line 201
for with, objects in v.iteritems():
^
SyntaxError: invalid syntax

It would be great if the Zope folks fixed their code so that the Twisted tests will start passing again. This issue actually impacts all packages that depend on zope.interface -- for example zanshin, which also fails in the Pybots buildslave for the OSAF libraries (graciously set up by Bear from OSAF).

If you're interested in following such issues as they arise in the Pybots buildbot farm, I encourage you to subscribe to the Pybots mailing list.

Update

I mentioned the 'with' keyword already causing problems. As it turns out, Seo Sanghyeon's buildslave, which is running tests for docutils and roundup, uncovered an issue in roundup, related to the 'as' keyword:

Traceback (most recent call last):
File "run_tests.py", line 889, in
process_args()
File "run_tests.py", line 879, in process_args
bad = main(module_filter, test_filter, libdir)
File "run_tests.py", line 671, in main
runner(files, test_filter, debug)
File "run_tests.py", line 585, in runner
s = get_suite(file)
File "run_tests.py", line 497, in get_suite
mod = package_import(modname)
File "run_tests.py", line 489, in package_import
mod = __import__(modname)
File "/home/buildslave/pybots/roundup/./test/test_actions.py", line 6, in
from roundup import hyperdb
File "/home/buildslave/pybots/roundup/roundup/hyperdb.py", line 29, in
import date, password
File "/home/buildslave/pybots/roundup/roundup/date.py", line 735
as = a[0]
^
SyntaxError: invalid syntax

Wednesday, September 06, 2006

Pybots update

I'm happy to report that the Pybots project got its first user other than yours truly. Seo Sanghyeon set up a buildslave running on Debian Unstable which is running the docutils unit tests every time a checkin is made into Python trunk or in the 2.5-maint branch.

Marc-Andre Lemburg also offered to run a buildslave for the egenix-mx-base tests, while Manuzhai offered to run a buildslave for the Trac tests. Skip Montanaro also expressed interest in running a buildslave, but he hasn't decided on a project yet.

You can see the current pybots buildslaves here. Expect to see more active buildslaves in the next few days.

Marc-Andre suggested I write a Pybots FAQ and put some info on the Python wiki, so here they are:

Friday, August 18, 2006

On the importance of functional testing

I did not need further proof of the fact that functional tests are a vital piece in a project's overall testing strategy. I got that proof anyway last night, while watching the Pybots buildmaster status page. I noticed that the Twisted unit tests were failing, not because of errors within the Twisted package, but because pre-requisite packages such as ZopeInterface could not be installed anymore. If you followed my post on setting up a Pybots buildslave, you know that before running the Twisted unit tests, I attempt to install ZopeInterface and other packages using the newly-built python binary, via "/tmp/python-buildbot/local/bin/python setup.py install".

Well, all of a sudden last night this last command was failing with errors such as:
error: invalid Python installation:
unable to open /tmp/python-buildbot/local/lib/python2.6/config/Makefile
(No such file or directory)

This proved to be a transient error, due to some recent checkins that modified the release numbers in the python svn trunk from 2.5 to 2.6. The issue was fixed within an hour, but the interesting thing to me was that, while this step was failing in the Pybots Twisted tests, the Python buildbots running the Python-specific unit tests against the trunk were merrily chugging along, green and happy (at least on Unix platforms). This was of course to be expected, since nothing major had changed as far as the internal Python unit tests were concerned. However, when running a functional test involving the newly-built Python binary -- and in my case that functional test consisted simply in running "python setup.py install" on some packages -- things started to break.

Lesson learned? Always make sure you test your application from the outside in, by exercising it as a user would. Unit tests are necessary (indeed, they are essential), but they are not sufficient by any means. A 'holistic' test strategy takes into consideration both white-box-type unit tests, and black-box-type functional tests. Of course, the recommended way of running all these types of tests is via a continuous integration tool such as buildbot.
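
For what it's worth, even this kind of "install it and see" functional check is easy to automate; here is a hedged sketch (the binary path and package directory are placeholders) of a smoke test that verifies a freshly built Python can still install a package:

import subprocess

NEW_PYTHON = "/tmp/python-buildbot/local/bin/python"   # the newly-built binary

def smoke_test_install(package_dir):
    """Return True if 'python setup.py install' succeeds for the given package."""
    returncode = subprocess.call([NEW_PYTHON, "setup.py", "install"], cwd=package_dir)
    return returncode == 0

# Usage sketch: assert smoke_test_install("/tmp/ZopeInterface-3.1.0c1")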

Thursday, August 17, 2006

QA blog at W3C

Karl Dubost sent me a message about some issues he had with running Cheesecake on a Mac OS X machine. It turned out he was using an ancient version of Cheesecake, even though he ran "easy_install Cheesecake". I told him to upgrade to the latest version via "easy_install Cheesecake==0.6" and his problems disappeared.

Anyway, this is not what I was trying to blog about. Reading his email signature, I noticed he works as a Conformance Manager at W3C. Karl also mentions a QA blog at W3C in his signature. Very interesting blog, from the little I've seen so far. For example, from the "Meet the Unicorn" post, I found out about a W3C project (code-name Unicorn) which aims to be THE one tool to use when you want to check the quality -- i.e. the W3C conformance I suspect -- of web pages. This tool would "gather observations made on a single document by various validators and quality checkers, and summarize all of that neatly for the user." BTW, here is a list of validators and other test tools that you can currently use to check the conformance of your web pages.

Added the blog to my fluctuating collection of Bloglines feeds...Thanks, Karl!

Setting up a Pybots buildslave

If you're interested in setting up a buildbot buildslave for the Pybots project, here are some instructions:

Step 1

Install buildbot on your machine. Instructions can be found here, here, here and here.

Step 2

Create a user that will run the buildbot slave process. Let's call it buildslave, with a home directory of /home/buildslave. Also create a /home/buildslave/pybot directory.

Step 3

Create the file buildbot.tac in /home/buildslave/pybot, with content similar to this:

from twisted.application import service
from buildbot.slave.bot import BuildSlave

# set your basedir appropriately
basedir = r'/home/buildslave/pybot'
host = 'www.python.org'
port = 9070
slavename = 'abc'
passwd = 'xyz'
keepalive = 600
usepty = 1

application = service.Application('buildslave')
s = BuildSlave(host, port, slavename, passwd, basedir, keepalive, usepty)
s.setServiceParent(application)


Step 4

Create a python-tool directory under /home/buildslave/pybot. You must name this directory python-tool, as the buildmaster will use this name in the build steps.

Step 5

Create a file called run_tests.py under the python-tool directory. This is where you will invoke the automated tests for your projects.

How this all works

The buildmaster will have your buildslave execute the following steps, every time a check-in is made into the python subversion trunk (and also every time a check-in is made in the 2.5 branch):

1. Update step: runs "svn update" from the python svn trunk
2. Configure step: runs "./configure --prefix=/tmp/python-buildbot/local"
3. Make step: runs "make all"
4. Test step: runs "make test" (note: this step runs the python unit tests, not your project's unit tests)
5. Make install step: runs "make install"; this will install the newly-built python binary in /tmp/python-buildbot/local/bin/python
6. Project-specific tests step: this is when your run_tests.py file will be run via "/tmp/python-buildbot/local/bin/python ../../python-tool/run_tests.py"
7. Clean step: runs "make distclean"

Important note: since your automated tests will be run via the newly-built python binary installed in /tmp/python-buildbot/local/bin/python, you need to make sure you install all the pre-requisite packages for your package using this custom python binary, otherwise your unit tests will fail because they will not find these pre-requisites. For example, for the Twisted unit tests, I had to install setuptools, ZopeInterface, pycrypto and pyOpenSSL, before I could actually run the Twisted test suite.

So in my run_tests.py file I first call a prepare_packages.sh shell script, before I launch the actual test suite (I copied the pre-requisite packages in /home/buildslave):

$ cat prepare_packages.sh

#!/bin/bash

cd /tmp

rm -rf setuptools*
cp ~/setuptools-0.6c1.zip .
unzip setuptools-0.6c1.zip
cd setuptools-0.6c1
/tmp/python-buildbot/local/bin/python setup.py install
cd ..

rm -rf ZopeInterface*
cp ~/ZopeInterface-3.1.0c1.tgz .
tar xvfz ZopeInterface-3.1.0c1.tgz
cd ZopeInterface-3.1.0c1
/tmp/python-buildbot/local/bin/python setup.py install
cd ..

rm -rf pycrypto-2.0.1*
cp ~/pycrypto-2.0.1.tar.gz .
tar xvfz pycrypto-2.0.1.tar.gz
cd pycrypto-2.0.1
/tmp/python-buildbot/local/bin/python setup.py install
cd ..

rm -rf pyOpenSSL-0.6*
cp ~/pyOpenSSL-0.6.tar.gz .
tar xvfz pyOpenSSL-0.6.tar.gz
cd pyOpenSSL-0.6
/tmp/python-buildbot/local/bin/python setup.py install
cd ..

rm -rf Twisted
svn co svn://svn.twistedmatrix.com/svn/Twisted/trunk Twisted

Then I call the actual Twisted test suite, via:

/tmp/python-buildbot/local/bin/python -Wall /tmp/Twisted/bin/trial --reporter=bwverbose --random=0 twisted
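
Putting those two pieces together, here is a hedged sketch of the general shape a run_tests.py like mine might take (not the exact script, just an illustration):

import os
import sys

HOME = "/home/buildslave"
NEW_PYTHON = "/tmp/python-buildbot/local/bin/python"

def main():
    # install the pre-requisites with the newly-built Python binary
    if os.system("bash %s/prepare_packages.sh" % HOME) != 0:
        sys.exit(1)
    # then run the Twisted test suite via trial
    status = os.system("%s -Wall /tmp/Twisted/bin/trial "
                       "--reporter=bwverbose --random=0 twisted" % NEW_PYTHON)
    if status != 0:
        sys.exit(1)

if __name__ == "__main__":
    main()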

You can see the current Pybots status page here.

If you are interested in setting up your own buildslave to participate in the Pybots project, please send a message to the Pybots mailing list. I will send you a slavename and a password, and then we can test the integration of your buildslave with the buildmaster.

Update 10/16/09

I realized that these instructions for setting up a Pybot buildslave are a bit outdated. Discussions on the Pybots mailing list prompted certain changes to run_tests.py, even though you're still OK if you follow the instructions above to the letter.

Here are some enhancements that you can take advantage of:

1. You can test several projects, each in its own build step, simply by having your run_tests.py script be aware of an extra command-line argument, which will be the name of the project under tests. An example of such a script is here: run_tests.py. The script receives a command-line argument (let's call it proj_name) and invokes a shell script called proj_name.sh. The shell script checks out the latest code for project proj_name (or downloads the latest distribution), then runs its unit tests. Here is an example: Cheesecake.sh.

2. You do not have to hardcode the path to the newly built Python binary in your run_tests.py or your shell scripts. You can simply retrieve the path to the binary via sys.executable. This run_tests.py script sets an environment variable called PYTHON via a call to
os.putenv('PYTHON', sys.executable)
Then the variable is used as $PYTHON in the shell scripts invoked by run_tests.py (thanks to Elliot Murphy and Seo Sanghyeon for coming up with this solution.)
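
Combining the two enhancements, a multi-project run_tests.py might look roughly like this (a sketch consistent with the description above, not a verbatim copy of the linked script):

import os
import sys

def main():
    if len(sys.argv) < 2:
        print "Usage: run_tests.py <proj_name>"
        sys.exit(1)
    proj_name = sys.argv[1]

    # expose the newly-built Python binary to the shell scripts as $PYTHON
    os.putenv('PYTHON', sys.executable)

    # delegate to the project-specific script, e.g. Cheesecake.sh
    if os.system("bash %s.sh" % proj_name) != 0:
        sys.exit(1)

if __name__ == "__main__":
    main()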

Cheesecake case study: Cleaning up PyBlosxom

Will Guaraldi wrote an article on "Cleaning up PyBlosxom using Cheesecake". Cool stuff!

Will, I hope we meet at the next PyCon, I owe you a case of your favorite beer :-)

Wednesday, August 16, 2006

Pybots -- Python Community Buildbots

The idea behind the Pybots project (short for "Python Community Buildbots") is to allow people to run automated tests for their Python projects, while using Python binaries built from the very latest source code from the Python subversion repository.

The idea originated from Glyph, of Twisted fame. He sent out a message to the python-dev mailing list (thanks to John J. Lee for bringing this message to my attention), in which he said:

"I would like to propose, although I certainly don't have time to implement, a program by which Python-using projects could contribute buildslaves which would run their projects' tests with the latest Python trunk. This would provide two useful incentives: Python code would gain a reputation as generally well-tested (since there is a direct incentive to write tests for your project: get notified when core python changes might break it), and the core developers would have instant feedback when a "small" change breaks more code than it was expected to."

Well, Neal Norwitz made this happen by setting up a buildmaster process on one of the servers maintained by the PSF. He graciously allowed me to maintain this buildmaster, and I already added a buildslave which runs the Twisted unit tests (in honor of Glyph, who was the originator of this idea) every time a check-in is made in the Python trunk. You can see the buildmaster's status page here.

Note that some of the Twisted unit tests sometimes fail for various reasons. Most of these reasons are environment-related -- for example the user that the buildbot slave runs as used to not have a login shell in /etc/passwd, and thus a specific test which was trying to run the login shell as a child process was failing. Not necessarily a Twisted bug, but still something that's nice to catch.

And this brings me to a point I want to make about running your project's automated tests in buildbot: I can almost guarantee that you will find many environment-specific issues that would otherwise remain dormant, since most likely the way you're usually running your tests is in a controlled environment that you had set up carefully some time ago. There's nothing like running the same tests under a different user's account and environment.

Of course, if you've never run your tests under a continuous integration process such as buildbot, you'll also be pleasantly -- or maybe not so pleasantly -- surprised at the amount of stuff that can get broken by one of your check-ins that you considered foolproof. This is because buildbot tirelessly checks out the very latest source code of your project, then runs your unit tests against that code. When you run your unit tests on your local machine, chances are you might not have synchronized your local copy with the repository.

This does assume that you have unit tests for your project, but since you have been reading this post this far, I assume you either do, or are interested in adding them. I strongly urge you to do so, and also to contribute to the Pybots project by setting up a buildslave for your project.

I'll post another entry with instructions on how to configure a buildslave so that it can be coordinated by the Pybots buildmaster. I also have a mailing list here (thanks, Titus!) for people who are interested in this project. Please send a message there, and I'll respond to you.

The buildmaster is currently running the build steps every time a check-in is made to the Python trunk, and to the 2.4 branch. In the near future, there will be a 2.5 branch, and the trunk will be used for 2.6 check-ins. I'll modify the buildmaster configuration to account for this.

Update: The buildmaster is now aware of changes in both the trunk and the newly created release25-maint branch. You can watch the HTML status page for all builders, or for the trunk builders.

BTW, if you need instructions on setting up buildbot, you can find some here, here, here and here.

Dave Nicolette's recommended reading list on agile development

Worth perusing. I always enjoy Dave's blog posts on agile development, so I trust his taste :-)

Tuesday, August 15, 2006

Cheesecake 0.6 released

Thanks to Michał's hard work, we released Cheesecake 0.6 today. The easiest way to install it is via easy_install: sudo easy_install Cheesecake

Update: Please report bugs to the cheesecake-users mailing list.

Here's what you get if you run cheesecake_index on Cheesecake itself:

$ cheesecake_index -n cheesecake
py_pi_download ......................... 50 (downloaded package cheesecake-0.6.tar.gz directly from the Cheese Shop)
unpack ................................. 25 (package unpacked successfully)
unpack_dir ............................. 15 (unpack directory is cheesecake-0.6 as expected)
setup.py ............................... 25 (setup.py found)
install ................................ 50 (package installed in /tmp/cheesecakeNyfM4f/tmp_install_cheesecake-0.6)
generated_files ........................ 0 (0 .pyc and 0 .pyo files found)
---------------------------------------------
INSTALLABILITY INDEX (ABSOLUTE) ........ 165
INSTALLABILITY INDEX (RELATIVE) ........ 100 (165 out of a maximum of 165 points is 100%)

required_files ......................... 180 (6 files and 2 required directories found)
docstrings ............................. 63 (found 17/27=62.96% objects with docstrings)
formatted_docstrings ................... 0 (found 2/27=7.41% objects with formatted docstrings)
---------------------------------------------
DOCUMENTATION INDEX (ABSOLUTE) ......... 243
DOCUMENTATION INDEX (RELATIVE) ......... 70 (243 out of a maximum of 350 points is 70%)

pylint ................................. 36 (pylint score was 7.01 out of 10)
unit_tested ............................ 30 (has unit tests)
---------------------------------------------
CODE KWALITEE INDEX (ABSOLUTE) ......... 66
CODE KWALITEE INDEX (RELATIVE) ......... 83 (66 out of a maximum of 80 points is 83%)


=============================================
OVERALL CHEESECAKE INDEX (ABSOLUTE) .... 474
OVERALL CHEESECAKE INDEX (RELATIVE) .... 79 (474 out of a maximum of 595 points is 79%)

For a detailed explanation of how the scoring is done, see the main Wiki page, and/or run cheesecake_index in --verbose mode.

Stay tuned for a cool case study on improving a package using the Cheesecake guidelines, courtesy of Will Guaraldi.

Tuesday, August 01, 2006

A couple of Apache performance tips

I had to troubleshoot an Apache installation recently. Apache 2.0 was running on several Linux boxes behind a load balancer. If you ran top on each box, the CPU was mostly idle, there was plenty of memory available, and yet Apache seemed sluggish. Here are a couple of things I did to speed things up.

1. Disable RedirectMatch directives temporarily

All the Apache servers had directives such as:

RedirectMatch /abc/xyz/data http://admin.mysite.com/abc/xyz/data

This was done so administrators who visited a special URL would be redirected to a special-purpose admin server. Since the servers were pretty much serving static pages, and they were under considerable load due to a special event, I disabled the RedirectMatch directives temporarily, for the duration of the event. Result? Apache was a lot faster.

2. Increase MaxClients and ServerLimit

This is a well-known Apache performance optimization tip. Its effect is to increase the number of httpd processes available to service the HTTP requests.

However, when I tried to increase MaxClients over 256 in the prefork.c directives and I restarted Apache, I got a message such as:

WARNING: MaxClients of 1000 exceeds ServerLimit value of 256 servers, lowering MaxClients to 256. To increase, please see the ServerLimit directive.

There is no ServerLimit entry by default in httpd.conf, so I proceeded to add one just below the MaxClients entry. I restarted httpd, and I still got the message above. The 2 entries I had in httpd.conf in the IfModule prefork.c section were:

MaxClients 1000
ServerLimit 1000

At this point I resorted to all kinds of Google searches in order to find out how to get past this issue, only to notice after a few minutes that the number of httpd processes HAD been increased to well over the default of 256!

UPDATE 03/06/09: It turns out that the new MaxClients and ServerLimit values take effect only if you stop httpd and then start it back up again. Just doing a restart doesn't do the trick...


So, lesson learned? Always double-check your work and, most importantly, know when to ignore warnings :-)

Now I have a procedure for tuning the number of httpd processes on a given box:

1. Start small, with the default MaxClients (150).
2. If Apache seems sluggish, start increasing both MaxClients and ServerLimit; restart httpd every time you do this.
3. Monitor the number of httpd processes; you can use something like:

ps -def | grep httpd | grep -v grep | wc -l

If the number of httpd processes becomes equal to the MaxClients limit you specified in httpd.conf, check your CPU and memory (via top or vmstat). If the system is not yet overloaded, go to step 2. If the system is overloaded, it's time to put another server in the server farm behind the load balancer.

That's it for now. There are many other Apache performance tuning tips that you can read from the official Apache documentation here.

Tuesday, July 25, 2006

Porting to the Linux Standard Base

IBM developerWorks offers a tutorial (free registration required) on "Porting to the Linux Standard Base". Excerpt from the introduction:

"Because Linux® is an open operating system, you can configure and assemble it to suit specialized purposes. However, while variety and choice are beneficial for users, heterogeneity can vex software developers who must build and support packages on a multitude of similar but subtly different platforms. Fortunately, if an application conforms to the Linux Standard Base (LSB), and a flavor of Linux is LSB compliant, the application is guaranteed to run. Discover the LSB, and learn how to port your code to the standard."

Adhering to standards -- I'm all about that, although I've been called names before for showing enthusiasm for Python project layout standardization...Anyway, I'm glad to see the Linux community pushing the LSB, since this will benefit both distribution creators and application writers.

GK-H on "Myths, lies and truths about the Linux kernel"

"Myths, lies and truths about the Linux kernel" is the title of Greg Koah-Hartman's closing keynote at OLS 2006. Fascinating read, especially when Greg talks about the apparently chaotic Linux kernel development process, which turns out to be amazingly flexible and evolutionary. I was also impressed by the arguments for open-source drivers that are maintained and modified at the same time with the kernel -- this make a stable internal kernel API unnecessary, and allows the kernel to evolve.

Titus sent me the link to the keynote, and he also underlined this paragraph related to testing:

"Now, this is true, it would be great to have a simple set of tests that everyone could run for every release to ensure that nothing was broken and that everything's just right. But unfortunately, we don't have such a test suite just yet. The only set of real tests we have, is for everyone to run the kernel on their machines, and to let us know if it works for them."

It's pretty amazing to me that the Linux kernel manages to be that stable without a regression test suite. Imagine how much better it would be with such a regression suite. Clearly, a community project waiting to happen.

In the meantime, back to my pybots project -- also waiting to happen -- but at least it's a bit more realistic from my perspective. I hope to be able to give you more details soon, as the PSF is working on getting a server on which to configure the buildmaster.

Wednesday, July 19, 2006

Marick on refactoring

Brian Marick just posted his definition of refactoring: "A refactoring is a test-preserving transformation." It very succinctly expresses the critical need for tests: if you don't have tests, how do you know your refactoring preserves anything? Great stuff as usual from Brian M.

Monday, July 10, 2006

Emmental

The fourth Cheesecake/SoC iteration has been completed -- code name emmental. Here are some stories that Michał implemented in this iteration:
  • Implement a --static command line flag, which makes Cheesecake run only static tests that don't execute any of the package's code. This is useful, for example, for the Cheesecake/PyPI integration, where we'll only look at 'static' indexes such as documentation and installability, as opposed to 'dynamic' indexes which involve code execution, such as unit test coverage;
    • Make execution of some parts of the code depend on the "static" flag.
    • Implement a static "profile" -- a subset of all indexes that scores only statically.
    • Technical detail related to this story: a nice touch from Michał was the implementation of index dependencies via this changeset.
  • Implement a --lite command line flag, which makes Cheesecake skip time-consuming tests, such as the pylint test;
  • Static unit test analysis (lots of work still to be done here);
    • Use Michael Hudson's AST-based pydoctor package.
    • Compute the ratio of code (functions/classes) to tests.

Monday, June 26, 2006

OpenWengo Code Camp

Found this via the buildbot-devel mailing list: OpenWengo Code Camp. It seems similar in philosophy and goals to the Google Summer of Code. Excerpt from the home page:

"OpenWengo Code Camp is a friendly, challenging and mind-stimulating contest aimed at pushing open source software projects forward.
Students apply for proposed software development subjects for which they have a particular interest in. These subject proposals describe ways to bring enhancements to existing or new FOSS projects, generally by writing source code.

If their application is accepted, they get the chance to be mentored by open source software contributors to work during 2 months on the subject for which they applied. At the end of summer, mentors give their appreciation: if goals were successfully reached, students get 3500 euros of cash.

Mentors get 500 euros of cash if they played their role which consist mainly in helping students to complete their work successfully and evaluating their work at intermediate and final stages."

Sounds pretty reasonable to me, and the cash is not bad either :-)

Looks like one of the proposals involves building a SQL backend for buildbot -- hopefully the project will go through.

Devon

The 3rd milestone for the Cheesecake/SoC project has been completed -- code name devon. This iteration had 3 stories:

1. Create functional tests that actually execute cheesecake_index script. Check that Cheesecake is:
  • properly cleaning up
  • leaving the log file in place when a package is broken and removing it otherwise
  • computing score properly
  • handling its command line options properly

2. Write script that will automatically download and score all packages from PyPI.
  • Each package should have its score and complete Cheesecake output logged.
  • Gather time statistics for each package.
  • Make a summary after scoring all packages:
    • number of packages for which Cheesecake raised an exception
    • manually check first/last 10 packages and think about improving scoring techniques
3. Add support for egg packages
  • Refactor supported packages interface
  • Add support for installing eggs via setuptools easy_install
As far as story #2 is concerned, Michał and I discussed some modifications and tweaks we need to do to the scoring algorithms so that more packages get higher scores. Here are some ideas that we already implemented:
  • don't decrease installability score if a package is not hosted on PyPI (the package still needs to have a valid download link on its PyPI page);
  • split required files and directories into 3 categories: high, medium, and low importance, each category getting a score of 30, 20, and 10 points respectively;
  • here is the current classification, where Doc means the file can also have a 'txt' or 'html' extension, and OneOf means the score is given if any one of the files/directories in the specified list is found:
cheese_files = {
    Doc('readme'): 30,
    OneOf(Doc('license'), Doc('copying')): 30,
    OneOf(Doc('announce'), Doc('changelog')): 20,
    Doc('install'): 20,
    Doc('authors'): 10,
    Doc('faq'): 10,
    Doc('news'): 10,
    Doc('thanks'): 10,
    Doc('todo'): 10,
}

cheese_dirs = {
    OneOf('doc', 'docs'): 30,
    OneOf('test', 'tests'): 30,
    'demo': 10,
    OneOf('example', 'examples'): 10,
}

We're getting ready to release Cheesecake into the wild pretty soon -- I'd say in a couple of weeks -- so stay tuned!

We've also seen some activity on the cheesecake-users and cheesecake-dev mailing lists, and as always we encourage people interested in this project to send us feedback/suggestions/criticisms. We've been known to always take constructive criticism into account :-)

Update

Read also Michał's post on devon.

Sunday, June 11, 2006

Camembert

The second week of the Cheesecake/SoC project has ended, and all the stories have been completed. We chose the name camembert for this iteration. It included some very tasty (or should I say tasteful) refactoring from Michał, who sprinkled some magic pixie dust in the form of metaclasses and __getitem__ wizardry. It also included some development environment-related tasks, all of them executed via buildbot: automatically generating epydoc documentation and publishing it, running coverage numbers and publishing them, and converting the reST-based README file into Trac Wiki format. This last task had as a side-effect the creation of a little tool that Michał called rest2trac, which will be made available in the near future. Currently it does the conversions that we need for the markup we use in the README file.

All in all, another productive week, and lots of good work from Michał. Check out his Mousebender blog for more information.

Friday, June 09, 2006

Xen installation and configuration

Courtesy of my co-worker Henry Wong, here's a guide on installing and configuring Xen on an RHEL4 machine.

Introduction

Xen is a set of kernel extensions that allow paravirtualization of operating systems which support them, providing near-native performance for the guest operating systems. These paravirtualized systems require a compatible kernel to be installed so that they are aware of the underlying Xen host. The Xen host itself also needs to be modified in order to be able to host these systems. More information can be found at the Xen website.

Sometime in the future, XenSource will release a stable version that supports the installation of unmodified guest machines on top of the Xen host. This requires that the host machine's processor have some sort of virtualization technology built in. Both Intel and AMD have their own versions of virtualization technology, VT for short, to meet this new requirement. To distinguish between the two competing technologies, we will refer to Intel's VT by its codename, Vanderpool, and to AMD's VT as Pacifica.


Installation

Before starting, it is highly recommended that you visit the Xen Documentation site. It has a more general overview of what is involved in the setup, as well as some additional information.

Terminology

  • domain 0 (dom0): In Xen terms, this is the host domain, which hosts all of the guest machines. It allows for the creation and destruction of virtual machines through the use of Python-based configuration files that contain information on how each machine is to be constructed. It also manages the resources taken up by the guest domains, e.g. networking, memory, disk space, etc.

  • domain U (domU): In Xen terms, this is the guest domain, or the unprivileged domain. The guest domain has resources assigned to it by the host domain, along with any limits the host domain sets. None of the physical hardware is available directly to the guest domain; instead, the guest domain must go through the host interface to access the hardware.

  • hypervisor: Xen itself is a hypervisor -- in other words, something that is capable of running multiple operating systems on the same physical machine. A more general definition is available here.

Prerequisites

Xen Hypervisor Requirements
  • A preexisting Linux installation, preferably something running a 2.6 kernel. In this case, we'll be running Red Hat Enterprise Linux 4 Enterprise Server (ES), Update 3.

  • At least 1GB of RAM

  • 40GB+ of available disk space

  • (OPTIONAL) Multiple CPUs. Hyperthreading doesn't count in this case. The more, the better, since Xen 3.0 is capable of virtualized SMP for the guest operating system.

Guest Domain Requirements

  • A preexisting Linux installation, preferably something running either the same kernel version as the host-to-be or newer. More on this later in the page.

  • Some storage for the guest domain. An LVM-based partitioning scheme would be ideal, but you can use a file to back the storage for the machine.

Xen Hypervisor Installation Procedure

  1. Obtain the installation tarball from the XenSource download page. In this case, grab the one for RHEL4.

  2. Extract the tarball to a directory with sufficient space and follow the installation instructions that are provided by XenSource. For RHEL4, it is recommended that you force the upgrade of glibc and the xen-kernel RPMs. This will be explained in detail further in the page.

  3. Append the following to the grub.conf/menu.lst configuration file for the GRUB bootloader:

    title Red Hat Enterprise Linux ES-xen (2.6.16-xen3_86)
    root (hd0,0)
    kernel /xen-3.0.gz dom0_mem=192M
    module /vmlinuz-2.6-xen root=/dev/VolGroup00/LogVol00 ro console=tty0
    module /initrd-2.6-xen.img

    This might change depending on the version that is installed, but for the most part, using just the major versions should work. Details about the parameters will be explained later in the page.

  4. Reboot the machine with the new kernel.

The machine should now be running the Xen kernel.
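A quick sanity check that you really booted into the Xen kernel and are running as dom0 (a sketch; the exact version string will differ):

# the running kernel release should have a -xen suffix
uname -r

# xend must be running for the xm tool to work (start it via its init script if needed);
# Domain-0 should show up in the list
xm list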

Guest Domain Storage Creation Procedure

LVM Backed Storage

By default, RHEL4 (and basically any recent Linux distribution that uses a 2.6 kernel) uses LVM (the Logical Volume Manager) to keep track of system partitions in a logical fashion. There are two important concepts in LVM: the volume group and the logical volume. A volume group consists of one or more physical disks (or partitions) that are grouped together at creation time, with each volume group having a unique identifier. Logical volumes are then created inside these volume groups; each can be given a name, a size, and other properties, drawing from the pool of available space in the group. If you wish to learn more about LVM, a visit to the LVM HOWTO on the Linux Documentation Project site is recommended.
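For example, creating a root and a swap logical volume for a guest on an existing volume group might look like this (the volume group name and sizes are assumptions, chosen to match the VolGroup01/xenvm1-* names used in the guest configuration example later in this post):

# create a 10GB root volume and a 1GB swap volume in volume group VolGroup01
lvcreate -L 10G -n xenvm1-root VolGroup01
lvcreate -L 1G -n xenvm1-swap VolGroup01

# put a filesystem and a swap signature on them
mkfs.ext3 /dev/VolGroup01/xenvm1-root
mkswap /dev/VolGroup01/xenvm1-swap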

Physical Partition Backed Storage

Far easier to set up than LVM, but a little less flexible, physical-partition-backed storage for a guest machine simply uses a system partition to store the virtual machine's data. This partition needs to be formatted with a filesystem supported by the host if you are using the paravirtualization approach for domain creation.
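For example, if /dev/sdb3 is a spare partition you're willing to dedicate to the guest (the device name is an assumption):

# format the spare partition with a filesystem the host understands
mkfs.ext3 /dev/sdb3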

File-Backed Storage

By far the easiest way to get a guest domain up and running, a file-backed store allows you to put the guest's backing file anywhere there is space, so you don't have to give up any extra partitions in order to create the virtual machine. The trade-off is a performance penalty.
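A rough sketch of creating a 10GB file-backed root (the path and size are assumptions); in the guest configuration you would then use a file: prefix instead of phy: in the disk entry:

# create a 10GB sparse file and put an ext3 filesystem on it
dd if=/dev/zero of=/var/xen/xenvm1-root.img bs=1M count=1 seek=10239
mkfs.ext3 -F /var/xen/xenvm1-root.img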

Guest Domain Installation Procedure

  1. Create an image tarball from the preexisting Linux installation for the guest. Use tar along these lines, where <tarball_path> is the path where you want the image saved (make sure to exclude the tarball itself if it lives on the filesystem being archived):

    tar --exclude=<tarball_path> --exclude=/sys/* --exclude=/tmp/* --exclude=/dev/* --exclude=/proc/* -czpvf <tarball_path> /

    Note that the excludes come before rather than after the short flags. This is because the -f short option is positional, and thus it needs the archive name immediately after the option.

  2. Move the tarball over to the Xen hypervisor machine.

  3. Mount the desired location of the guest storage on the hypervisor.

  4. Unpack the tarball into the guest storage partition.

  5. Copy the modules for the Xen kernel into the guest's /lib/modules directory. You can use the following command to copy the modules directory, replacing <guest_mount> with the guest storage mount point:

    $ cp -r /lib/modules/`uname -r` <guest_mount>/lib/modules/

  6. Move the /lib/tls directory to /lib/tls.disabled for the guest. This step is specific to Red Hat-based systems: due to the way glibc is compiled, the guest operating system will incur a performance penalty if this is not done. Ignore this step for any non-Red Hat systems. (A consolidated sketch of steps 3 through 6 appears right after this list.)

Initial setup of the guest is completed.
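Putting steps 3 through 6 together, here is roughly what the whole sequence looks like on the hypervisor (the mount point, tarball path, and LVM device are assumptions consistent with the earlier examples):

# step 3: mount the guest's root storage
mkdir -p /mnt/xenvm1
mount /dev/VolGroup01/xenvm1-root /mnt/xenvm1

# step 4: unpack the guest image, preserving permissions
tar -xzpf /path/to/guest-image.tar.gz -C /mnt/xenvm1

# step 5: copy the Xen kernel's modules into the guest
cp -r /lib/modules/`uname -r` /mnt/xenvm1/lib/modules/

# step 6 (Red Hat-based guests only): disable the tls libraries
mv /mnt/xenvm1/lib/tls /mnt/xenvm1/lib/tls.disabled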


Running With Xen

Creating and starting a guest domain

  1. Create a guest configuration file under /etc/xen. Use the following example as a guideline:

    kernel = "/boot/vmlinuz-2.6-xen"                # The kernel to be used to boot the domU
    ramdisk = "/boot/initrd-2.6.16-xenU.img" # Need the initrd, since most of these systems run udev

    memory = 256 # Base memory allocation
    name = "xmvm1" # Machine name
    cpus = "" # Specific CPU's to assign the vm, leave blank
    vcpus = 1 # Number of available CPU's to the system
    vif = [ '' ] # Defines the virtual network interface

    # LVM-based storage
    disk = [ 'phy:VolGroup01/xenvm1-root,hda1,w', # Guest storage device mapping to the virtual machine
    'phy:VolGroup01/xenvm1-swap,hda2,w' ]

    root = "/dev/hda1 ro" # Root partition kernel parameter
  2. Mount the guest storage partition and edit the guest's /etc/fstab to reflect any changes made to the configuration file. Remove any extraneous mount points that won't be recognized by the guest when the system is started; otherwise, the guest machine will not boot. (See the example fstab right after this list.)

  3. Start the machine using the following command, where <config_name> is the name of the configuration file you created under /etc/xen:

    $ xm create -c <config_name>

    This will create the machine and attach it to a virtual console. You can detach from the console using CTRL-].
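For step 2, a minimal guest /etc/fstab consistent with the hda1/hda2 mapping in the example configuration could be written out like this (a sketch; adjust devices and filesystems to your setup, and replace /mnt/xenvm1 with your guest mount point):

cat > /mnt/xenvm1/etc/fstab <<'EOF'
/dev/hda1   /          ext3    defaults        1 1
/dev/hda2   swap       swap    defaults        0 0
none        /dev/pts   devpts  gid=5,mode=620  0 0
none        /proc      proc    defaults        0 0
none        /sys       sysfs   defaults        0 0
EOF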

Further setup is still required, but it is OS-specific. The network interfaces will need to be set up for the guest machine.
