Testing Insights from B::DeparseTree

Rocky Bernstein’s recent post about B::DeparseTree contained several insights on testability and writing good tests. Here are my takeaways.

  • Testing a 6,000-line module is as difficult as it sounds. Long methods and large classes are code smells that made it into Martin Fowler’s Refactoring book. It’s often nigh impossible to unit-test a monolith. It might be time for some separation of concerns. This will also—no surprise for testing aficionados like me—make it easier to maintain and extend the framework. Rocky notes some specifics under “The need for modularity” in his post, e.g., don’t repeat yourself, separate data from presentation, separate interface from implementation, and separate version-specific behaviors from version-agnostic behaviors.

  • Understanding a 500-line test is as difficult as it sounds. Long, complex tests made Gerard Meszaros’s list of test smells, in his book xUnit Test Patterns. And as we’ll see as we go on, we can’t ignore the stink just because we’re Test::More purists: his advice applies equally to class-based and procedural testing.[1]

  • Tests should not depend on each other. Rocky noted that “the slightest error” would generate thousands of lines of follow-on test failures. He didn’t explicitly say so, but I suspect that indicates that the first test failure left the fixture in an invalid state, causing all other tests to fail. This is what Meszaros calls “Data Sensitivity.” Each test needs to start with a clean fixture.
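The “clean fixture” principle above can be sketched in plain Test::More. This is a minimal illustration, not code from Rocky’s post; the `fresh_fixture` helper and the counter fixture are hypothetical stand-ins:

```perl
use strict;
use warnings;
use Test::More;

# Hypothetical fixture builder: each subtest gets its own fresh
# fixture, so a failure in one subtest cannot corrupt the next.
sub fresh_fixture { return { count => 0 } }

subtest 'increment from zero' => sub {
    my $fixture = fresh_fixture();
    $fixture->{count}++;
    is $fixture->{count}, 1, 'count is 1 after one increment';
};

subtest 'still starts from zero' => sub {
    my $fixture = fresh_fixture();
    is $fixture->{count}, 0, 'previous subtest did not leak state';
};

done_testing;
```

Because each subtest builds its own fixture, the first failure stays local instead of cascading into thousands of follow-on failures.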

  • Each test should test one clear feature or unit of code. This is part of what causes “frail” tests, what Meszaros calls “Fragile Tests.” One big takeaway from Rocky’s post is the technique of round-trip testing. That is, you take the output of Deparse, which should be executable Perl, run it, and see if it generates the same behavior as the original snippet of code. He uses this with self-testing scripts so that the decompiled test code can verify itself. We can use a similar principle to validate HTML or other markup, for example, by asking what output we expect the markup to produce and rendering only those aspects of the generated markup.
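The round-trip idea can be sketched with B::Deparse itself: deparse a sub back to Perl source, recompile that source, and check that the recompiled sub behaves like the original. This is a simplified sketch of the technique, not B::DeparseTree’s actual test harness:

```perl
use strict;
use warnings;
use Test::More;
use B::Deparse;

# Original behavior we want the round trip to preserve.
my $original = sub { my ($n) = @_; return $n * 2 };

# Deparse the coderef back to Perl source text.
my $deparser = B::Deparse->new;
my $source   = $deparser->coderef2text($original);

# Recompile the deparsed source.
my $roundtrip = eval "sub $source";
die "recompilation failed: $@" if $@;

# Verify identical behavior across a few inputs.
for my $n (0, 1, 42) {
    is $roundtrip->($n), $original->($n), "round-trip agrees for input $n";
}

done_testing;
```

If the deparser ever emits invalid or behavior-changing Perl, one of these assertions pinpoints the failing input directly.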

  • Each test failure should clearly identify which feature or piece of code is broken. This is related to the bullet above but from the perspective of the programmer running the tests. If you can’t tell what went wrong, your tests are not helping you understand or write your code. Meszaros calls this “Obscure Test,” and one consequence is that you can’t use your tests as documentation. It can also result in buggy tests and high maintenance costs. This is why I prefer the four-phase test structure—setup, test, verify, and cleanup—with which you can see at a glance what each test does and why it failed. It’s also important to use complete assertion messages: each test failure should read like a good bug report, indicating what action we took, what we expected to happen, and what occurred instead.
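The four-phase structure might look like the following in Test::More. The temp-file fixture is my own illustrative example, not one from the post:

```perl
use strict;
use warnings;
use Test::More;
use File::Temp qw(tempfile);

# Phase 1. Setup: create a temp-file fixture.
my ($fh, $filename) = tempfile();
print {$fh} "hello\n";
close $fh;

# Phase 2. Exercise: the action under test.
open my $in, '<', $filename or die "open $filename: $!";
my $line = <$in>;
close $in;

# Phase 3. Verify: the assertion message reads like a bug report.
is $line, "hello\n",
    "reading the line back from $filename returns what we wrote";

# Phase 4. Cleanup: remove the fixture even if the test above failed.
END { unlink $filename if defined $filename }

done_testing;
```

Each phase is visible at a glance, and a failure message names the action taken, the expectation, and (via `is`) what actually occurred.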

All in all, an insightful read.

Peace, love, and may all your TAP output turn green…


[1] This distinction is a bit of a myth as well, which I touch on in Testing Strategies for Modern Perl. Test::Class and other xUnit-like frameworks don’t supplant procedural testing practices. Rather, they add new tools by which you can manage your tests. In particular, they can help you manage a collection of small, well-defined test methods. And yes, I can already hear you saying, “I do that with subtest $test_name => sub {}.” Yes, that’s exactly what I mean. I just happen to use sub test_name : Test() {} instead. Plus I can (a) run a named test method in isolation from the command line, (b) set up test fixtures before running each method in a module with a line of code, (c) automatically clean up after each test method (even failing ones) with a line of code, (d) inherit test fixtures across a whole set of test classes, and (e) abort a failed test method with a single line of code without affecting any other test methods.
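The Test::Class workflow described in the footnote might look roughly like this (the `My::Widget::Test` class and its fixture are hypothetical):

```perl
package My::Widget::Test;
use strict;
use warnings;
use base 'Test::Class';
use Test::More;

# setup/teardown run around every test method, giving each method
# a clean fixture automatically, with one line of code apiece.
sub setup : Test(setup) {
    my ($self) = @_;
    $self->{fixture} = { count => 0 };   # fresh fixture per method
}

sub teardown : Test(teardown) {
    my ($self) = @_;
    delete $self->{fixture};             # runs even after a failure
}

sub test_increment : Test(1) {
    my ($self) = @_;
    $self->{fixture}{count}++;
    is $self->{fixture}{count}, 1, 'increment starts from a clean fixture';
}

package main;
My::Widget::Test->runtests;
```

Subclassing `My::Widget::Test` inherits the fixture methods, which is what makes fixtures reusable across a whole set of test classes.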

Tim King is Lead Developer at The Perl Shop. Tim got his start writing real-time embedded software for high-speed centrifuges in the 1980s and went on to do embedded software for Kurzweil Music Systems and Avid Technology. He has been developing for the web since the web existed, and brings discipline and skills honed from embedded systems to enterprise software. His expertise is in designing for software quality, achieved through automated code testing, test-first development, and risk-managed refactoring, all through an agile process. This approach naturally lends itself to working with legacy code, such as successfully and safely refactoring a 465-line legacy function used in a video streaming application into a structurally sound design. Or designing for maintainability through cleanly layered architectures, like a web service that handles multiple RPC protocols using a common controller and a thin view layer, and that can easily be extended to handle additional protocols. Tim is skilled in Perl, JavaScript, and other programming languages, in Internet protocols, and in SQL, and is familiar with the internals of a variety of open source applications. Tim also writes and performs music, and has authored and published a number of inspirational books.

One Reply to “Testing Insights from B::DeparseTree”

  1. Thanks for the kind words.

    Too often I feel like I am speaking in a vacuum with no one listening, and I would really prefer a dialog rather than a monologue.

    The rewrite continues to be a struggle and consumes a lot of time. But I can see big payoffs were this completed. Hopefully the barrier to entry to help out will be reduced. Also, more precise traceback information. Finally, the tree structure could be used as a stepping stone to form a real Perl AST, which was suggested 6 years ago in http://modernperlbooks.com/mt/2012/09/why-perl-5-needs-an-ast.html .

    Many programming languages provide an AST. Python and JavaScript do, to name just a couple.
