This 30-day project explores the refactoring of a legacy system. The Everything Engine is an aging software project that powers Perl Monks, Everything 2, and a few other websites. It suffers from poor design and maintainability. Learn what it’s like to look over the shoulder of an experienced developer as he refactors, redesigns, and updates the code.
Today’s task demonstrates how easy it is to port further tests. The new system works.
Day 15: A Port in an Hour
I learned a lesson the other day about porting nodes in order: follow the
inheritance path. The next likely candidate is dbtable. Looking at the code
reveals that it inherits from node, so it’s the right one to port next. At
least, it’s not the wrong one to port next.
After migrating the old tests to the new
version and changing the test file so things will run, 424 tests run and 8
fail. (There are plenty of skips.) That’s not bad.
This node has no extra database table, so the dbtables() method can stay in
the parent as it is.
The next test is for insert(). There’s just a little code in the node’s
method, so the test can be short. (That’s nice; the old test uses the really
old precursor to the current test infrastructure, so it’s ugly and difficult
to maintain.) There are really only two things to test here. First, insert()
has to call SUPER(). Then it has to do the right thing based on the result.
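Those two behaviors can be pinned down with a short sketch in plain core Perl. The package names and return values below are invented placeholders, not the Everything Engine’s real classes; the point is only the shape of the test: delegate to SUPER(), then honor its result.

```perl
package My::Node;
sub new    { bless {}, shift }
sub insert { return 42 }    # stand-in for the parent's real insert()

package My::Node::DBTable;
our @ISA = ('My::Node');

sub insert {
    my $self   = shift;
    my $result = $self->SUPER::insert(@_);    # must call SUPER()
    return 0 unless $result;                  # must act on the result
    return $result;
}

package main;
use strict;
use warnings;
use Test::More tests => 2;

my $node = My::Node::DBTable->new();
is( $node->insert, 42, 'insert() should return the result of SUPER::insert()' );

{
    # temporarily make the parent's insert() fail; the child must notice
    no warnings 'redefine';
    local *My::Node::insert = sub { 0 };
    ok( ! $node->insert, '... and should return false when SUPER() fails' );
}
```

Replacing the parent method through the symbol table is the crudest possible mock, but it makes the two assertions independent of any mocking module.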
It’s pretty easy to write new tests (and one shows that the
SUPER() call must pass the correct arguments). While running them,
the total of 424 seemed high, and several test descriptions repeated.
I’d mistyped the test runner in t/Node/dbtable.t. Correcting that ran
220 tests, with 16 failures.
Lots of these failures are because
insert() calls an extra
method on the database. That’s easy to fix… or so I thought. After tracing
errors for a couple of minutes, I realized that
node actually replaces the database member. The test in the
node test class didn’t account for this. (It didn’t matter until now.)
With that fixed, 210 tests run and 24 fail. I think this is progress, though it seems backwards.
The failing test is still the one for insert(). Even after
overriding it to add the mock on createNodeTable(), it still
fails. Somehow, there’s another place where the database disappears.
Ahh, it’s because my mock to update the database turned a condition that was
previously always false into one that’s sometimes true. The first call to
getNode() should return a false value. The second should return a
data structure with the database connection in it. With that fixed, 210 tests
run and 23 fail. That’s a lot of work to get one test passing, but it’s worth
it: that work covers 67% of this test file.
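The two-phase getNode() behavior can be mocked by hand with a closure over a queue of results. This is a dependency-free sketch, not the suite’s actual mock; the shape of the returned hash (a DB key holding the connection) is an assumption for illustration.

```perl
use strict;
use warnings;

# Queue of canned results: first call false, second call a structure
# carrying the database "connection".
my @results;

{
    package MockDB;
    sub new     { bless {}, shift }
    sub getNode { return shift @results }    # hand off the next canned result
}

my $db = MockDB->new();
@results = ( undef, { DB => $db } );

my $first  = $db->getNode('some node');
my $second = $db->getNode('some node');

printf "first call false: %d\n",     !defined $first;                  # 1
printf "second has DB handle: %d\n", ref( $second->{DB} ) eq 'MockDB'; # 1
```

A real mocking module would also record the calls so the test can assert how many lookups happened, but the queue alone reproduces the failure mode described above.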
Next up is
nuke(). This test will look very similar to that for
insert(), as it’s basically the same.
Copy and paste to the rescue. (Don’t worry; there’s a refactor here shortly
too.) Again, there’s a bug in the
SUPER() call in the node. With
that bug fixed, 191 tests run and only 2 fail. Good.
Is it bad that the test count is going down? Not necessarily. If we make sure to run the entire test suite, we still test all of the pieces of nodes in isolation, and that’s not bad. Yes, the behavior of a parent can change, and that’s bad if the children don’t notice it (and these tests do assume they will), but the right approach isn’t to fix this testing method. It’s to run these tests against a testing database and check the data there. I don’t want to do that until I have better test coverage of all of the nodes where possible, so taking a bit of a shortcut for now isn’t awful. It just means I have to be careful.
The last test to port is for restrictTitle(), as those are the only failing
tests. I think I can call the parent test method and have that work just fine.
196 tests run and 4 fail. One of them is a logging failure.
Looking at the implementation of the method, it completely overrides the parent, so the test method should do the same. That might fix the problems.
Nope. Oh well. Oh wait, the problem is that I misnamed the test method. With
that fixed, 190 tests run and 2 fail. Better.
I’m still not sure where the errors are going. Annotating the mock error
logger (the one that stores errors for later inspection) didn’t reveal
anything. Is the test even reaching that point?
No! It’s not. The old
AUTOLOAD()-based scheme of calling node
methods is the culprit. It’s a quick change to set the actual object’s
parameters instead of passing in a hash reference.
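The hazard of the hash-reference style is easy to show in miniature. This contrast is invented for illustration (the package, field, and method names are hypothetical, not the engine’s real API): an AUTOLOAD() dispatcher that copies values out of a hash reference gives errors nowhere to surface, while setting the object’s own parameters keeps the data in plain sight.

```perl
use strict;
use warnings;

{
    package OldStyle;
    our $AUTOLOAD;
    sub new { bless { title => '' }, shift }

    sub AUTOLOAD {
        my ( $self, $args ) = @_;
        # the dispatcher copies values out of a hash reference, so a bad
        # key or a swallowed error disappears without a trace
        $self->{$_} = $args->{$_} for keys %{ $args || {} };
    }
    sub DESTROY { }    # keep AUTOLOAD from catching object destruction
}

my $node = OldStyle->new();

# indirect: everything funnels through AUTOLOAD() via a hash reference
$node->update( { title => 'new title' } );
print "$node->{title}\n";    # new title

# direct: set the object's own parameter, then call methods normally
$node->{title} = 'newer title';
print "$node->{title}\n";    # newer title
```

With the direct style, a mocked logger or database actually sees the calls it expects, because nothing is hiding behind the catch-all dispatcher.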
In doing that, I found an untested branch and added a test. Now 193 tests run and all of them pass. Did I break anything else? Nope. 2506 tests run in 54 files and everything passes. This is checkin #848.
I have time to do a bit more testing. Would something like
htmlcode be fast? It’s just
restrictTables and it inherits from
node. I’ll try it.
After adding the boilerplate test stuff, 212 tests run and 3 fail. That’s great!
The first test to port is
test_dbtables(). I know how to do
this. 212 tests run and 2 fail.
The next test to port is also the final test:
test_restrict_title(). One failure is that the error message for
invalid names is slightly different. I can fix that in the test.
Now only one test fails. That’s because
htmlcode titles are
much more restrictive than normal node titles.
That implies that this is another case of test method overriding. That’s fine.
With that in place, 210 tests run and all of them pass. How’s the rest of the world?
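Test method overriding is just ordinary inheritance applied to the test classes. A sketch in the spirit of that suite, with invented class names and invented title rules standing in for the real ones:

```perl
use strict;
use warnings;
use Test::More tests => 2;

{
    package NodeTest;
    sub new { bless {}, shift }
    # generic rule: any non-empty title passes
    sub title_allowed { my ( $self, $title ) = @_; return length($title) > 0 ? 1 : 0 }
}

{
    package HTMLCodeTest;
    our @ISA = ('NodeTest');
    # htmlcode titles are much more restrictive, so the child replaces the
    # parent's check entirely rather than extending it
    sub title_allowed { my ( $self, $title ) = @_; return $title =~ /^\w+$/ ? 1 : 0 }
}

ok(   NodeTest->new->title_allowed('any old title'),
      'node should allow most titles' );
ok( ! HTMLCodeTest->new->title_allowed('any old title'),
      'htmlcode should reject titles with spaces' );
```

When the child class overrides the whole test method, the parent’s looser expectations never run against the stricter node, which is exactly the fix described above.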
Hm, there are some test skips I didn’t notice before. I’ll have to check
them out. One is because I miscounted the tests in the
restrictTitle() test method. Fixed. (From 193 tests to 191.)
The other skips are in the
user tests, in the
test_xml_tag() method. Oh right, that’s because I updated the
setting tests. Deleting the unnecessary test methods from
user and setting fixes those.
Now everything looks good, with 2725 tests running. This is a great way to raise the numbers!
I have several checkins to do now. First, the
user test changes
are #849. The test counter changes for
dbtable are #850. Then the
htmlcode tests are #851. That’s the end of the day’s work.
(I like how quickly the smaller tests port; this will continue!)