October 23, 2008

Cost of servers 20 years ago — request for help!

Posted in Software at 9:30 pm by mj

I’m putting together a presentation, and I need your help!

I’m looking for the cost and specs of typical “commodity” hardware 20 years ago, versus the cost and specs for typical “big guns” at the same time.

I’m also looking for database options other than the usual suspects (Oracle, DB2, Sybase) that may have been available at the time. Cost comparisons are ideal.

In other words… if you were transported back to 1988, and you had to support a large (for the time) data set, and still knew what you know now about scaling… would you have had any alternative to (something akin to) Oracle on big iron?

Many thanks, and I’ll post a follow-up later.


September 6, 2008

Creating Database Sandboxes for Unit/Integration Tests

Posted in Development, Software at 9:45 am by mj

After Baron Schwartz’s recent hint that he had solved the database-sandbox problem for unit testing at a previous employer, I got to thinking about the problem again.

To be clear: this is not really unit testing, but I’ve found integration tests at various levels are just as important as unit tests. So much so that I have taken to creating both test and integration source directories, and, whenever possible, requiring both suites to pass as part of the build process.
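(For the curious, I mean roughly this layout, in Maven-ish terms; the directory names are just my own convention:)

    src/main/java         -- production code
    src/test/java         -- unit tests: fast, isolated, no external services
    src/integration/java  -- integration tests: allowed to hit a real database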

There are two suggestions I’ve seen for solving this problem, both of which are applicable to local in-memory databases as well.

The first is to start with a completely empty database, populate it, and then tear it down. Unfortunately, this is not only difficult, it’s time-consuming. If you do it before each test, your tests will take hours to run; if you do it only once before the whole suite, your tests will not be isolated enough.
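For the record, the per-test version looks something like this (a minimal JUnit 4 sketch; the schema and fixture data are invented for illustration):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    public class OrderDaoTest {
        private Connection conn;

        @Before
        public void setUp() throws Exception {
            // Build the schema and seed fixtures from scratch before every test.
            conn = DriverManager.getConnection("jdbc:mysql://localhost/test_db", "test", "test");
            Statement st = conn.createStatement();
            st.execute("CREATE TABLE orders (id INT PRIMARY KEY, total DECIMAL(10,2))");
            st.execute("INSERT INTO orders VALUES (1, 19.99)");
            st.close();
        }

        @After
        public void tearDown() throws Exception {
            // Tear everything down so the next test starts empty.
            Statement st = conn.createStatement();
            st.execute("DROP TABLE orders");
            st.close();
            conn.close();
        }

        @Test
        public void findsOrderById() throws Exception {
            // ... exercise the DAO against the seeded data ...
        }
    }

Multiply that by a few hundred tables and a few thousand tests, and you can see where the hours go.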

A former co-worker suggested an incremental approach: start with an empty data set, and let each test (perhaps through annotations) define which data must be fresh. I like that. It requires a lot of infrastructure and discipline. It could encourage simpler tests, although with simpler tests come more tests, and thus more discipline.
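Something like this is what I have in mind; the @FreshData annotation and the runner that honors it are hypothetical, just to illustrate the idea:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Hypothetical annotation: each test declares which tables it needs
    // reloaded from fixtures before it runs; everything else is left as-is.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface FreshData {
        String[] tables();
    }

    // A custom runner or base class would reload only the named tables:
    //
    //   @Test
    //   @FreshData(tables = {"orders", "customers"})
    //   public void cancellingAnOrderRestocksInventory() { ... }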

The other approach I’ve seen suggested a couple of times now (including in a comment on Baron’s blog) is the use of a global transaction. Unfortunately, this does not work with all database engines. MySQL tends to be the real killjoy, because nested transactions are not supported and DDL statements are not transactional. Yeah, even in the transactional engines.
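For completeness, the transaction-per-test idea looks roughly like this (a hand-rolled JUnit 4 base class; the comments mark where MySQL ruins it):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import org.junit.After;
    import org.junit.Before;

    public abstract class TransactionalTestBase {
        protected Connection conn;

        @Before
        public void beginTransaction() throws Exception {
            conn = DriverManager.getConnection("jdbc:mysql://localhost/test_db", "test", "test");
            conn.setAutoCommit(false); // open the test-wide transaction
        }

        @After
        public void rollBack() throws Exception {
            // Discard everything the test did... unless the code under test
            // issued DDL, which on MySQL implicitly commits and breaks isolation.
            conn.rollback();
            conn.close();
        }
    }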

So, here’s what I’m thinking. If I were starting over with a new team, with minimal code already existing, I think I wouldn’t solve this problem from an engineering/code perspective. Instead, I’d solve it from an operational perspective (though it still requires application/test infrastructure changes).

Picture a central test database server with one pristine copy of the data, and thousands of database instances. The application (test) asks this server for an available database instance, uses it for a single test, and then moves on. The next test resets the application state, so it asks the server for another available database instance, and so on.
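In client terms, I picture something as dumb as this; SandboxClient, DatabaseInstance, and the checkout protocol are entirely hypothetical, only the shape of the flow matters:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class SandboxExample {
        public void runOneTest() throws Exception {
            SandboxClient sandbox = new SandboxClient("test-db-server", 9000);
            DatabaseInstance db = sandbox.checkout(); // blocks until an instance is free
            try {
                Connection conn = DriverManager.getConnection(db.jdbcUrl(), "test", "test");
                try {
                    // ... run exactly one test against this instance ...
                } finally {
                    conn.close();
                }
            } finally {
                sandbox.release(db); // the daemon verifies/restores it before reuse
            }
        }
    }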

Meanwhile, there is a daemon on that server that is constantly checking each database instance. If the underlying data files do not match the pristine copy, they are restored and the instance is placed back into the available pool.

An instance is considered available for testing when (a) there are no threads connected to it, and (b) its data files match the pristine copy.

Tests that do not alter the underlying data files do not require restoration.
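The daemon itself could be little more than this loop (a sketch: Instance and Pool are hypothetical, and checksum()/restore() would wrap something like md5sum and rsync):

    import java.io.File;
    import java.util.List;

    public class RestoreDaemon implements Runnable {
        private final List<Instance> instances; // Instance and Pool are hypothetical
        private final Pool pool;
        private final String pristineChecksum;

        public RestoreDaemon(List<Instance> instances, Pool pool, String pristineChecksum) {
            this.instances = instances;
            this.pool = pool;
            this.pristineChecksum = pristineChecksum;
        }

        public void run() {
            while (true) {
                for (Instance inst : instances) {
                    if (inst.hasConnectedThreads()) {
                        continue; // still in use by a test
                    }
                    if (!checksum(inst.dataDir()).equals(pristineChecksum)) {
                        restore(inst.dataDir()); // copy the pristine files back
                    }
                    pool.markAvailable(inst); // clean: back into the pool
                }
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    return;
                }
            }
        }

        private String checksum(File dataDir) { /* e.g. md5 over the data files */ return ""; }
        private void restore(File dataDir)    { /* e.g. rsync from the pristine copy */ }
    }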

What about schema changes? Answer: you have to unit/integration test them too. When you’re ready to promote your code, you deploy the schema changes to the pristine copy as part of the standard production release process. An interesting side effect is that this will, in many cases, force other developers to merge production changes back into their private branches, because many of their tests will fail until they do.

Contrary to Baron’s suggestion, in a properly designed system this does not require changes to production code. As long as you can inject a database connection pool into your application (and all quality code should have this property, cough, cough), your test framework can inject a connection pool that interrogates the test server first.
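Concretely (reusing the same hypothetical names as above), the wiring might look like:

    // ConnectionPool.java -- production code depends only on this interface:
    import java.sql.Connection;
    import java.sql.SQLException;

    public interface ConnectionPool {
        Connection getConnection() throws SQLException;
    }

    // SandboxConnectionPool.java -- the test framework injects an implementation
    // that asks the sandbox server for an instance first; production wiring
    // never changes.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class SandboxConnectionPool implements ConnectionPool {
        private final DatabaseInstance db; // hypothetical, from the sketch above

        public SandboxConnectionPool(SandboxClient sandbox) {
            this.db = sandbox.checkout();
        }

        public Connection getConnection() throws SQLException {
            return DriverManager.getConnection(db.jdbcUrl(), "test", "test");
        }
    }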

And it can scale to multiple test database servers as your team and the number of tests grow.

I haven’t tried this, and I have too many deadlines (and too much legacy code that I’m still learning in my current team) to experiment with a real-world application.

But what do you think? What holes are there in this proposal?

Aside from violating the Engineering Aesthetic that the application should control the environment it needs for testing. Which, come to think of it, is what has caused me the most problems over the years.