TDD Anti-Patterns

Recently I began writing a paper on TDD Anti-Patterns, and decided to first quickly jot down some of the most common ones that others or I have encountered “in the wild.” I then posted what I had to the testdrivendevelopment Yahoo! Groups mailing list, and got an excellent bit of feedback!

As a side note, keep in mind this is merely a catalog at the moment, hopefully to be expanded in full down the road.

TDD Anti-Pattern Catalogue

The Liar
An entire unit test that passes all of its test cases and appears valid, but upon closer inspection turns out not to really test the intended target at all.
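For example, a minimal JUnit-style sketch of a Liar (the DiscountCalculator class and the scenario are hypothetical): the test does the arithmetic itself and never actually calls into the object it claims to cover.

    import junit.framework.TestCase;

    public class DiscountCalculatorTest extends TestCase {
        public void testTenPercentDiscountIsApplied() {
            DiscountCalculator calculator = new DiscountCalculator(); // hypothetical class under test
            double price = 100.0;
            double discounted = price * 0.9; // the test does the math itself...
            assertEquals(90.0, discounted, 0.001); // ...and never calls calculator at all
        }
    }
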
Excessive Setup
A test that requires a lot of setup work before it can even begin testing. Sometimes several hundred lines of code are used to set up the environment for one test, with several objects involved, which can make it difficult to ascertain what is actually being tested amid the “noise” of all the setup.
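A compressed, purely hypothetical sketch of the shape this takes (all class names are made up): pages of collaborator wiring for the single assertion that matters.

    import junit.framework.TestCase;

    public class InvoiceTest extends TestCase {
        public void testTotalIncludesTax() {
            // Imagine a couple hundred more lines of wiring like this...
            Customer customer = new Customer("Jane", "Doe", "123 Main St", "555-0100");
            TaxTable taxTable = new TaxTable();
            taxTable.addRate("MO", 0.07);
            Warehouse warehouse = new Warehouse();
            warehouse.stock(new Product("SKU-1", 10.00), 50);
            Catalog catalog = new Catalog(warehouse, taxTable);
            Invoice invoice = new Invoice(new Session(customer, catalog));
            invoice.add("SKU-1", 1);

            assertEquals(10.70, invoice.total(), 0.001); // the only line that is really the test
        }
    }
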
The Giant
A unit test that, although it is validly testing the object under test, can span thousands of lines and contain many, many test cases. This can be an indicator that the system under test is a God Object.
The Mockery
Sometimes mocking can be good, and handy. But sometimes developers can lose themselves in their effort to mock out everything that isn’t being tested. In this case, a unit test contains so many mocks, stubs, and/or fakes that the system under test isn’t being tested at all; instead, what gets tested is the data returned from the mocks.
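A hedged sketch using Mockito (AccountService, AccountRepository and CurrencyConverter are hypothetical names): every collaborator is stubbed, and the assertion only reads back what the stubs were told to return, so nothing the service itself computes is ever checked.

    import static org.mockito.Mockito.*;
    import junit.framework.TestCase;

    public class AccountServiceTest extends TestCase {
        public void testGetBalance() {
            AccountRepository repository = mock(AccountRepository.class);  // hypothetical collaborator
            CurrencyConverter converter = mock(CurrencyConverter.class);   // hypothetical collaborator
            when(repository.findBalance("acct-1")).thenReturn(42.0);
            when(converter.toLocal(42.0)).thenReturn(42.0);

            AccountService service = new AccountService(repository, converter); // hypothetical class under test
            // The assertion can only ever see the stubbed value come back out.
            assertEquals(42.0, service.getBalance("acct-1"), 0.001);
        }
    }
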
The Inspector
A unit test that violates encapsulation in an effort to achieve 100% code coverage, but knows so much about the internals of the object that any attempt to refactor will break the existing test and require every change to be reflected in the unit test.
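A hypothetical sketch (ShoppingCart and its “items” field are made up): the test reaches into private state with reflection, so a harmless internal rename breaks it.

    import java.lang.reflect.Field;
    import java.util.List;
    import junit.framework.TestCase;

    public class ShoppingCartTest extends TestCase {
        public void testAddItem() throws Exception {
            ShoppingCart cart = new ShoppingCart(); // hypothetical class under test
            cart.addItem("SKU-1");

            // Peeks at a private field instead of observable behaviour;
            // renaming "items" or switching it to a Map breaks this test.
            Field itemsField = ShoppingCart.class.getDeclaredField("items");
            itemsField.setAccessible(true);
            List<?> items = (List<?>) itemsField.get(cart);
            assertEquals(1, items.size());
        }
    }
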
Generous Leftovers [4]
An instance where one unit test creates data that is persisted somewhere, and another test reuses that data for its own devious purposes. If the “generator” is run afterward, or not at all, the test using that data will outright fail.
The Local Hero [1]
A test case that is dependent on something specific to the development environment it was written on in order to run. The result is the test passes on development boxes, but fails when someone attempts to run it elsewhere.
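A minimal hypothetical sketch of a Local Hero: the hard-coded path only exists on the original author’s machine.

    import java.io.File;
    import junit.framework.TestCase;

    public class ReportLoaderTest extends TestCase {
        public void testTemplateIsAvailable() {
            // Passes on the author's box, fails everywhere else.
            File template = new File("C:\\projects\\reports\\template.xml"); // hypothetical local path
            assertTrue(template.exists());
        }
    }
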
The Nitpicker [1]
A unit test which compares a complete output when it’s really only interested in small parts of it, so the test has to continually be kept in line with otherwise unimportant details. Endemic in web application testing.
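A hypothetical sketch (OrderPage is made up): the assertion pins the entire rendered page when only the order total matters, so every unrelated copy change breaks the test.

    import junit.framework.TestCase;

    public class OrderPageTest extends TestCase {
        public void testOrderConfirmationPage() {
            String html = new OrderPage("ORD-42", 99.95).render(); // hypothetical class under test

            // Comparing the whole page ties the test to every header, footer and typo fix...
            assertEquals("<html><head><title>Acme Store</title></head><body>"
                    + "<h1>Thanks for your order!</h1>"
                    + "<p>Order ORD-42 total: $99.95</p>"
                    + "</body></html>", html);

            // ...when all it really cares about is something like:
            // assertTrue(html.contains("Order ORD-42 total: $99.95"));
        }
    }
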
The Secret Catcher[1]
A test that at first glance appears to be doing no testing due to the absence of assertions, but as they say, “the devil’s in the details.” The test is really relying on an exception to be thrown when a mishap occurs, and is expecting the testing framework to capture the exception and report it to the user as a failure.
The Dodger [1]
A unit test which has lots of tests for minor (and presumably easy to test) side effects, but never tests the core desired behavior. Sometimes you may find this in database access related tests, where a method is called, then the test selects from the database and runs assertions against the result.
The Loudmouth [1]
A unit test (or test suite) that clutters up the console with diagnostic messages, logging messages, and other miscellaneous chatter, even when tests are passing. Sometimes during test creation there was a desire to manually see output, but even though it’s no longer needed, it was left behind.
The Greedy Catcher [1]
A unit test which catches exceptions and swallows the stack trace, sometimes replacing it with a less informative failure message, but sometimes even just logging (c.f. Loudmouth) and letting the test pass.
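A hypothetical sketch (PaymentGateway is made up): the catch block throws away the stack trace and fails with a message that says nothing; replace fail() with a log call and the test passes no matter what went wrong.

    import junit.framework.TestCase;

    public class PaymentGatewayTest extends TestCase {
        public void testCharge() {
            try {
                new PaymentGateway().charge("4111111111111111", 10.00); // hypothetical class under test
            } catch (Exception e) {
                // The stack trace is discarded; all the reader ever sees is this.
                fail("charge failed");
            }
        }
    }
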
The Sequencer [1]
A unit test that depends on items in an unordered list appearing in the same order during assertions.
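A minimal sketch of a Sequencer (the scenario is hypothetical): the collection is a HashSet, which makes no ordering promise, yet the assertions assume one.

    import java.util.HashSet;
    import java.util.Iterator;
    import java.util.Set;
    import junit.framework.TestCase;

    public class TagCollectorTest extends TestCase {
        public void testCollectTags() {
            Set<String> tags = new HashSet<String>();
            tags.add("tdd");
            tags.add("testing");
            tags.add("antipatterns");

            // HashSet iteration order is unspecified; this may pass today
            // and fail on a different JVM or JDK version.
            Iterator<String> it = tags.iterator();
            assertEquals("tdd", it.next());
            assertEquals("testing", it.next());
            assertEquals("antipatterns", it.next());
        }
    }
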
Hidden Dependency
A close cousin of The Local Hero, a unit test that requires some existing data to have been populated somewhere before the test runs. If that data wasn’t populated, the test will fail and leave little indication to the developer of what it wanted, or why… forcing them to dig through acres of code to find out where the data it was using was supposed to come from.
The Enumerator [2]
A unit test in which each test case method name is just an enumeration, e.g. test1, test2, test3. As a result, the intention of each test case is unclear, and the only way to be sure is to read the test case code and pray for clarity.
The Stranger
A test case that doesn’t even belong in the unit test it is part of. It’s really testing a separate object, most likely an object that is used by the object under test, but the test case has gone and tested that object directly instead of relying on the object under test making use of it for its own behavior. Also known as TheDistantRelative [5].
The Operating System Evangelist [3]
A unit test that relies on a specific operating system environment in order to work. A good example would be a test case that uses the Windows newline sequence in an assertion, only to break when run on Linux.
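A hypothetical sketch of the newline example (ReportFormatter is made up): hard-coding the Windows line ending breaks the assertion wherever the formatter correctly uses the platform’s line separator.

    import junit.framework.TestCase;

    public class ReportFormatterTest extends TestCase {
        public void testTwoLineReport() {
            String report = new ReportFormatter().format("line one", "line two"); // hypothetical class

            // Hard-coded Windows newline; fails on Linux if the formatter
            // uses System.getProperty("line.separator").
            assertEquals("line one\r\nline two", report);
        }
    }
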
Success Against All Odds [2]
A test that was written to pass first rather than fail first. As an unfortunate side effect, the test case happens to always pass even when it should fail.
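A hypothetical sketch (Validator is made up): because the test was written to pass from the start, its assertion is a tautology and can never catch a regression.

    import junit.framework.TestCase;

    public class ValidatorTest extends TestCase {
        public void testRejectsEmptyInput() {
            boolean valid = new Validator().isValid(""); // hypothetical class under test

            // A tautology: this passes whether or not empty input is actually rejected,
            // because the test was never seen to fail in the first place.
            assertTrue(valid || !valid);
        }
    }
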
The Free Ride
Rather than write a new test case method to test another feature or functionality, a new assertion rides along in an existing test case.
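A hypothetical sketch (UserService and its methods are made up): the new welcome-email behaviour hitches a ride on an existing test instead of getting a clearly named test case of its own.

    import junit.framework.TestCase;

    public class UserServiceTest extends TestCase {
        public void testCreateUserAssignsId() {
            UserService service = new UserService(); // hypothetical class under test
            User user = service.create("jcarr");
            assertNotNull(user.getId());

            // Free rider: an unrelated feature checked here, so it never
            // shows up by name in the test report.
            assertTrue(service.welcomeEmailSentTo("jcarr"));
        }
    }
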
The One
A combination of several patterns, particularly TheFreeRide and TheGiant: a unit test that contains only one test method which tests the entire set of functionality an object has. A common indicator is that the test method name is often the same as the unit test name, and the method contains multiple lines of setup and assertions.
The Peeping Tom
A test that, due to shared resources, can see the result data of another test, and may fail even though the system under test is perfectly valid. This has been seen commonly in FitNesse, where static member variables used to hold collections aren’t properly cleaned up after test execution, often popping up unexpectedly in other test runs. Also known as TheUninvitedGuests.
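A hypothetical sketch of the shared-state problem: a static collection is never reset between tests, so one test sees the other’s data depending on execution order.

    import java.util.ArrayList;
    import java.util.List;
    import junit.framework.TestCase;

    public class AuditLogTest extends TestCase {
        // Shared static state that no tearDown() ever clears.
        private static List<String> entries = new ArrayList<String>();

        public void testEntryIsRecorded() {
            entries.add("login: james");
            assertEquals(1, entries.size());
        }

        public void testLogStartsEmpty() {
            // Passes or fails depending on whether the test above ran first.
            assertEquals(0, entries.size());
        }
    }
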
The Slow Poke
A unit test that runs incredibly slow. When developers kick it off, they have time to go to the bathroom, grab a smoke, or worse, kick the test off before they go home at the end of the day.

Well, that wraps it up for now. I’d like it if people could vote on the top ten anti-patterns listed here so I can go about writing up more detailed descriptions, examples, symptoms, and (hopefully) refactorings to move away from the anti-patterns listed.

If I left anyone out, please let me know, and any additional patterns are highly welcomed!

  • [1] Frank Carver
  • [2] Tim Ottinger
  • [3] Cory Foy
  • [4] Joakim Ohlrogge
  • [5] Kelly Anderson


123 Responses to “TDD Anti-Patterns”

  1. [...] We all know the idea of refactoring to patterns is good. In fact that’s generally how I refactor. I see a code smell (say you have a loop that is doing too many things) and refactor to a pattern like Split Loop in order to fix the smell. How about refactoring to Anti-patterns? Or just plain detecting Anti-patterns in unit tests. That’s what James Carr is up to here. [...]

  2. Frank Carver says:

    I’ve been mulling over this list and thought of another few I’ve faced
    in the past. It took me a while to think up appropriate names, though.
    Somehow these all seem to have an avian theme :)

    * The Cuckoo – A unit test which sits in a test case with
    several others, and enjoys the same (potentially lengthy) setup
    process as the other tests in the test case, but then discards some or
    all of the artefacts from the setup and creates its own.

    One of a cluster of test case structure anti-patterns along with
    Stranger/Distant Relative, Mother Hen and probably Long Conversation

    This is really a sort of “test case envy” and would probably benefit
    from refactoring into a separate test case. In more subtle cases it
    might also be asking for a shared base class for the two test cases.

    If left unchecked, this can lead to test cases where hardly any tests
    make use of the common setup process, but it’s done for every test
    anyway.

    One thing to avoid is the temptation to just shove *all* the
    initialization into the common setup, which just confuses and slows
    down the whole process. This can lead to a Mother Hen (below).

    * The Mother Hen – A common setup which does far more than the actual
    test cases need. For example creating all sorts of complex data structures
    populated with apparently important and unique values when the tests
    only assert for presence or absence of something.

    This can be a sign that the setup was written before the tests
    themselves, which is a subtle bit of up-front design easily hidden in
    an otherwise fully TDD process.

    One of a cluster of test case structure anti-patterns along with
    Stranger/Distant Relative, Cuckoo and probably Long Conversation

    * The Wild Goose – A unit test which, even though it initially appears
    simple, seems to require an ever increasing amount of the application
    to be created or initialised in order to make it pass.

    This sort of test can take up a disproportionate amount of developer
    time, and lose the benefits of the fast turn-round TDD cycle. In the
    worst case, TDD is effectively abandoned while all or most of the
    application is developed using an untested up-front process to make a
    single test pass.

    This is common among developers new to TDD who are not yet comfortable
    with “faking” a response before moving the focus of design to a lower
    level.

    * The Homing Pigeon – A unit test which (typically because it requires
    non-public access to a class under test) needs to be created and run
    at a particular place in a package hierarchy. This initially seems
    perfectly reasonable, but can unexpectedly fail if the class under
    test is moved to a new location or (worse) split so that its behaviour
    goes to more than one such location.

    One of a cluster of implicit assumption anti-patterns along with Local
    Hero, Operating System Evangelist and Hidden Dependency

    It should probably be noted that this is still an active area of
    discussion, with many people arguing that it is a necessary or even
    beneficial approach to design by allowing testing of non-public access.

    * The Dodo – A unit test which tests behavior no longer required in
    the application. Both the behaviour and the test should probably be
    deleted.

    * Love Birds – A test case which is so closely coupled with a
    particular class under test that it distracts from the need to test
    important interaction between classes. Commonly found in a test package
    structure created for Homing Pigeons where there is no obvious place
    to put tests for class interaction.

    I’m really enjoying this game, can you tell?

  3. chuck says:

    * Old Predictable – A test case that tests multithreaded conditions with the same thread ordering every time, or random data with the same random sample data (or seed) every time.

  4. James Carr says:

    Frank: What’s up with the bird themed names? Have you been watching National Geographic all day? ;)

    Chuck: Thanks for including a threaded related test. I have not had the experience of doing TDD with threaded applications much, but from what I have seen on some of the mailing list, it can be quite easy to write lousy tests for threaded applications.

    Thanks,
    James

  5. James Carr says:

    Hmm… here’s another that came to mind just now while looking at the unit tests for a certain project I just downloaded…

    The Web Surfer
    A unit test that requires a connection to the internet, with access to the outside world, in order to run.

    Wow! Some of these are really starting to get generalized!

  6. I’d like to see some solutions or advice about some of these, as well as just identifying them.

    The Inspector, for example – my solution is to make the class under test package private (but it implements a public interface) and test it that way.. although I find that a test that depends on breaking encapsulation is probably testing behaviour that is impossible via the public interface, so the test has little value. What’s your solution?

    The Loudmouth – I normally just run my tests and delete the logging code when they pass, but I can see the value in keeping it, and only showing it when the tests fail. With the standard exception-based test frameworks, I suppose this would be quite verbose. Thankfully I rolled my own test framework where you return SUCCESS or FAILURE (enum members – slightly clearer than true/false), so I don’t think this would be too verbose. Subjective, I know. A nice feature is that if the test throws an exception instead, it is not just a failure, but ‘blocked’. The test suite stops – of course that might not always be good, but it’s currently my preference.

    Frank Carver’s Wild Goose is surely pushing the word ‘unit’ somewhat. I have my doubts as to the usefulness of unit testing, as opposed to just automated testing. What is the largest thing that you can legitimately call a unit? Is it a method, an object, a package, an entire project?

    I tend to write tests first only when I expect buggy code. This is usually when I find the problem to be solved difficult to understand. I work on a network simulator, and sometimes my boss comes up with requirements that I’m not too sure about. He is accustomed to doing subnet calculations in his head, I’m not, so sometimes his requirements confuse me. Writing the test first, sometimes even in his presence, can help me there. Perhaps my approach is actually an antipattern – Selective Tester. What do you think?

    I also write tests when I find bugs. This is proving difficult with Swing-related code, especially as I try to run my tests in ‘headless’ mode. I’ve (temporarily) broken my application by wrapping JFrame and JDialog so that I can instantiate real versions in the real app and dummy versions in tests. I know that this is because of the behaviour-dependent design of Swing, but either way, it’s annoying.

    Cheers.

  7. I didn’t recognise greedy leftovers as something I said. I think The sloppy worker could have that effect, but I can’t take credit for Greedy worker :)

    Here are the ones I came up with on the list.

    The overly stateful – A test that uses the filesystem unnecessarily.
    For instance storing a file and reading it with a FileReader when a
    StringReader would suffice to prove your point.

    Which leads me to:

    The sloppy worker – A test that creates persistent resources but
    doesn’t clean after itself.

    The mime: Developing the code first and then repeating what the code does with expectation mocks. This makes the code drive the tests rather than the other way around. It usually leads to excessive setup and poorly named tests whose purpose is hard to see.

    Inversion of design aka Creative setup: This is closely related to
    James’ Excessive Setup anti-pattern, and I’ve seen it with mocks.
    Instead of redesigning the subject under test so that it doesn’t
    require excessive setup, the test case is redesigned so that it is
    easier to create a complex setup.

    This one is more about people’s behaviour but can be a cause of interesting patterns:

    The bounty hunter: Making sure to exercise all code in order to reach
    the required level of X% test coverage, but not really testing
    anything.

  8. James Carr says:

    Joakim: true… even though I did draw some of it from what you mentioned about sloppy worker. Thought I’d at least give partial credit since you mentioned some of the elements of it on the list. ;)

    Another amusing one someone brought up that was summarized in one statement:

    Old and Busted: Test Driven Development.
    The new hotness: Test Driven Unit Testing.

    I have yet to see a unit test for a unit test, however. :(

  9. [...] James Carr » Blog Archive » TDD Anti-Patterns Posted by tzara Filed in blah blah [...]

  10. casper says:

    A very enlightening and amusing post :) Could you clarify ‘The Secret Catcher’ a bit? I’m not sure exactly what you mean by it, but certainly there are valid NUnit tests that have no Assert() but rather an [ExpectedException] attribute.

  11. [...] James Carr has an amusing and enlightening post on TDD Anti-Patterns. They encompass all the key points I’ve written about and add a few more as well. I’m not completely sure if I understand what he means by ‘The Secret Catcher’, because it’s perfectly normal (and valid) to write unit tests with NUnit that have no Assert, but rather an [ExpectedException] attribute. Posted: Tuesday, November 28, 2006 7:26 AM by casper [...]

  12. James Carr says:

    Casper,

    Yeah, there’s been a bit of debate over the Secret catcher on the TDD list… after all, it is valid if you are testing for an exception to be thrown. The Secret Catcher is meant to be the case where you do not expect the exception to be thrown, but rather just rely on the framework to fail when it encounters the exception.

    Essentially, in java code it might look something like this:

    public void testSomething() {
        foo.makeCallThatCanThrowTwentyDifferentExceptions();
    }

    I’m sure you get the picture. ;)

  13. Mike Chaliy says:

    Another anti-pattern with another crazy name :)

    The Initalization Hero

    Occurs when every test method initializes the target itself. It should be refactored into a factory method.

  14. casper says:

    James – thanks for clearing that up. I’m in total agreement. Again, very well-written post :)

  15. Srihari Y. says:

    Am currently still a novice with TDD but one i’ve usually encountered in my initial steps is this :

    - Write test cases which mock interfaces/classes, use “setters” to put the mocked objects into the “under test” class, and have tests that test the functionality correctly.
    - But finally end up writing code in the constructor of that “under test” class without tests, or with tests which just test “initialisation” of the real world equivalents of the “mocked” interfaces/classes.

    So would this or some inherent assumption of mine (in above), qualify for a TDD antipattern? :)

    Have been actually wondering what would be the best way to avoid this though? any suggestions?

    If really there is something, then for naming.. well.. seems close to Mockery but actually bit different.. Cannot come up with any good names.. How about “Forget the constructor”? (ahh.. forget it.. sorry cant be creative enough! :) )

  16. [...] How test-driven development can go down the wrong path [...]

  17. [...] Some time ago I read a good article on some common TDD anti-patterns by James Carr (which lives right here.) Great, yeah, those are some anti-patterns. So what about patterns? [...]

  18. I am not sure if the “Giant setup” is really an anti-pattern. Some test cases do require a big setup routine because it requires you to set up a lot of components before doing anything.

    e.g. in a case management system
    You may have to create the person (there are many mandatory fields for the person to be valid e.g. name, birth date, gender)
    Then you have to create the address for the person
    Then you have to create the case with the person associated

    … at this point you can do operations to the case…

    As you can see the setup is quite big. The teardown is usually simple if we set up the unit test to rollback all the changes.

  19. [...] Link to James Carr » Blog Archive » TDD Anti-Patterns [...]

  20. James Carr gathered an excellent collection of TDD anti-patterns which is a MUST READ for every TDD-oriented developer

    Bookmark it at once! :)

  21. Andy Maleh says:

    The Giant helps me often in detecting code smells.

    Very interesting anti-patterns. Love the naming. Thanks for cataloging them.

  22. [...] This was published a while ago, but it is an awesome read. Made me laugh, and I just read it for the first time last week!! [...]

  23. Jon Herron says:

    Excellent article. Unfortunately I have been bitten by The Mockery one too many times; now at least I have a good name to refer to it by.

  24. I bumped into this article via JP Boodhoo’s blog the other day (I know the original article has been out in the wild for a while now).

    Anyhow, as I read through the article, realisation hit that I subconsciously follow a set of practices that help avoid the aforementioned anti-patterns ….

  25. Aaron Seet says:

    The worst anti-pattern of them all:

    The Void

    Not a single test to confirm the system works.

  26. The Mad Hatter’s Tea Party.
    This is one of those test cases that seems to test a whole party of objects without testing any specific one. This is often found in poorly designed systems that cannot use mocks or stubs and as a result end up testing the state and behaviour of every peripheral object in order to ensure the object under test is working correctly.

  27. James Carr says:

    Hi jupitermoonbeam,
    Yes… that’s one of the most annoying test smells I’ve seen.. you wind up with a lot of methods that test other objects that the object under test uses, and the result is quite often along the lines of “The Giant.”

    Thankfully the strategy for refactoring is simple… isolate methods that are testing other objects and extract them out to their own unit tests, and provide mocks/stubs for access to those objects during test execution.

    I’ve also found this to be a result of a code smell as well, particularly a God Object or one that has too many dependencies which could be isolated.

    Cheers,
    James

  28. [...] There’s one post and one post only on my blog that gets the largest amount of attention: the TDD Anti-Patterns entry. This was an entry to kind of precede an article I had prepared for the IEEE special issue on TDD, however that article won’t materialize (mainly due to last minute writing on my part). [...]

  29. LolitochkaBC says:

    Ааанукаребятки голосуем!!!

    Признавайтесь проказники и владельцы сайта blog.james-carr.org ))))

    ЧТО вы бдетее делать этим летом?!

  30. James Carr says:

    I ran your comment through babelfish, and I got:
    Of aaanukarebyatki we vote!!! Acknowledge mischief-makers the owners of site blog.james-carr.org)))) THAT you bdeteye to make with this summer?!

    Thanks… I guess. ;)

  31. Bill Smith says:

    How about the “Roll the Dice” test? Rather than actually figuring out where the software’s boundary conditions are, the test uses randomly generated data for parameters. Consequently, the test passes one time and fails the next.

  32. [...] Just read a couple of interesting posts. One provides a set of anti=patterns for TDD and the other looks at how best to Unit Test the db layer. [...]

  33. [...] Thanks to Frank Kelly for pointing out James Carr’s great list of testing anti-patterns – things to really avoid when writing automated test cases. This is definitely a page worth reading. [...]

  34. Good work James. I’ll be interested in seeing the final article and the pattern descriptions. You have an important set of principles captured here. As is typically the case at this stage of development, the current list may be lacking in orthogonality, but I’m sure that will get fleshed out.

    I picked up on a couple of your anti-patterns to talk about the value of tests as documentation at http://blog.softwarearchitecture.com/2007/06/test-driven-knowledge-management.html.

    Keep it up.

    Brian
    http://blog.softwarearchitecture.com

  35. [...] “Somehow I missed James Carr’s TDD Anti-Patterns late last year. I’ve perpetuated almost every one at least once. If you’re new to testing, browse the list, think about each entry, and watch for it in your own code.” [...]

  36. omo says:

    thanks for your work!
    i post a Japanese translation here:
    http://www.hyuki.com/yukiwiki/wiki.cgi?TddAntiPatterns

    if you have any trouble with this, please let me know.

  37. It would be good to see some kind of engine that could review a bunch of tests and classify them into some of these categories.

    “Hey Dude, 35% of your tests suck, here is why..”

  38. Bruno Orsier says:

    Hello James, thanks for publishing this list. Here is a french translation:

    http://bruno-orsier.developpez.com/anti-patterns/james-carr/

    Best regards

  39. D Lawson says:

    excellent list – with some amusing names :-)

  40. [...] James Carr » Blog Archive » TDD Anti-Patterns (tags: tdd testing programming pattern agile antipattern development architecture) [...]

  41. [...] Link to James Carr » Blog Archive » TDD Anti-Patterns [...]

  42. DavidDylan says:

    A very useful list – and one that I think would be useful to a much wider audience than just TDD practitioners. Do you think many of these anti-patterns are equally likely to show up in tests written after the code?

    I appreciate that, if you were originally targeting the IEEE Software TDD edition, it made sense to include TDD in the title. But now that your list seems to have a life of its own, perhaps you should consider renaming it: “unit test anti-patterns”.

    In fact I don’t think they’re even limited to unit tests – maybe “xUnit test anti-patterns”, to complement the recent book (which I haven’t read yet) called xUnit Test Patterns, would be the best name. I think “automated test patterns” might be going a bit too far, as that takes you too far into the domain of tester/scripters rather than developers writing tests.

  43. James Carr says:

    Indeed it has! It’s even been translated to 4 different languages now, so I’m starting to think maybe it’s time to dust this off and see what can be improved on it.

  44. Excellent list, because we must know “What is wrong?” before we start writing unit tests.

    Best Regards

  45. [...] And that’s all. Everything came (and keeps coming) naturally: good basic principles like isolation, self-contained tests, dependency injection, and so on. There is no steep learning curve, there is no big initial investment (try buying a full set of snowboarding equipment to discover shortly after that you’re not going to use it…), there is no need for a professional instructor other than those good guys that already inspired you in the beginning. [...]

  46. Ade Miller says:

    Great list. This was just what I was looking for to describe some anti-patterns my team has been inadvertently using.

    The Greedy Catcher – The XUnit.net framework supports much better exception catching than NUnit or MSTest.

    A variant of Excessive Setup and The Giant is complex base classes. Test classes derive from a hierarchy of base classes, making the actual test very hard to read. Lots of people fall into this trap when first creating unit tests in an attempt to follow the DRY principle 100%. With tests, sometimes readability is better than no duplication.

    Thanks,

    Ade

  47. [...] This post actually started out as my TDD and DSLs post but I got sidetracked into trying to describe an anti-pattern I’d noticed in a some of the unit tests we wrote as part of Service Factory: Modeling Edition. I came across James Carr’s excellent TDD anti-patterns post which got me thinking about this new variation on one of his anti-patterns.  [...]

  48. Ade Miller says:

    I blogged about another anti-pattern I’ve seen used a fair bit. If you’re still collecting here it is:

    http://www.ademiller.com/blogs/tech/2007/11/tdd-anti-pattern-inherited-test/

    Cheers,

    Ade

  49. [...] As a bonus, I’m finally writing tests before the code. Because, really, TDD just doesn’t work for me and I always end with crappy designs. And no, I’m not going to memorize TDD ANTI-PATTERNS (there are 22 of them!) to safely adopt the practice. I’d rather start with tests that don’t impact on design, and BDD allows me to do that. [...]

  50. [...] James Carr » Blog Archive » TDD Anti-Patterns (tags: patterns antipatterns testing) [...]

  51. [...] James Carr » Blog Archive » TDD Anti-Patterns (tags: tdd antipatterns testing Patterns programming Agile antipattern) [...]

  52. [...] James Carr did an excellent (and fun) job of collecting anti-patterns – BAD practices commonly found in projects – in TDD projects. [...]

  53. [...] James Carr’s TDD Anti-Patterns is a fun and enlightening read on of Test Driven Development. [...]

  54. [...] TDD Anti-Patterns (tags: tdd) [...]

  55. Jim says:

    Wonderful post! I’m glad it’s resurfaced for those of us who missed it the first time around. The Web Surfer and The Nitpicker are two I keep seeing repeatedly, as well as Excessive Setup which I’ve always thought of as Much Ado About Nothing.

    Thanks again!

  56. Tom Sugden says:

    Here is another simple anti-pattern that I think has been missed: The Denier or Contradiction. This is a test case where the message parameter of an assertion describes a success rather than a failure. If the test case fails, the test runner will display a contradictory failure message suggesting the test was successful.

  57. Matt says:

    Love the post..

    How about the ‘Stealth Bomber’. This is probably an extreme extension of the Hidden Dependency. I’ve seen a few tests that would simply ‘never’ fail when run interactively or debugged, or during a full test run in the day. When a supposed ‘fix’ was implemented it would even fool the developer into feeling pleased with themselves by passing for a day or two before bombing again in an overnight run. The logic wouldn’t show anything untoward, and it continued to bomb every few days without warning or trace of ‘why’. More than likely a combination of no-no’s: execution-sequence-dependent, date/time-dependent, database-dependent, context-dependent, etc.

    There’s probably another variation – maybe the ‘Blue Moon’. A test that’s specifically dependent on the current date, and fails as a result of things like public holidays, leap years, weekends, 5 week months etc. This is again guilty of not setting up its own data.

  58. Kevin says:

    Locale Evangelist – We work with an offshore team, and sometimes we create tests that fail in one locale but pass in another. Our most frequent culprit is a timezone dependency.

  59. [...] “TDD Anti-Patterns” Software developer James Carr shares his own catalogue of TDD Anti-Pattern. It’s a good complement to Gerard Meszaros’ “xUnit Test Patterns: Refactoring Test Code” and pretty helpful in case you’re wondering whether or not your test harness is well written. [...]

  60. Nelz says:

    James, great post!

    Since you asked for votes, I would vote for The Inspector, and The Excessive Setup, as those are the anti-patterns I’m most guilty of.

    I am also interested in The Nitpicker and how to combat its use in web-app testing.

  61. Don Hosek says:

    For the nitpicker, the best approach is to use regular expression matching on the output that you’re inspecting. This of course can create some potential issues of its own, especially for the regexp newbie, but is probably the best of the possible solutions.

  62. PandaWood says:

    Rick Clarkson says:
    “Thankfully I rolled my own test framework where you return SUCCESS or FAILURE (enum members – slightly clearer than true/false)”

    That sounds scary, nothing is clearer than true/false!

  63. [...] TDD Anti-Patterns [via ISerializable] [...]

  64. Chuck: I’m not sure that “old predictable” is an anti-pattern. A test that is not predictable is, itself, an anti-pattern. If you need good coverage like that, you might consider generating several different tests that order threads differently or use different, predictable, random seeds.

    – Max

  65. [...] TDD anti-patterns, by James Carr. [...]

  66. [...] James Carr lists a nice bunch of TDD anti patterns. via Jeremy . [...]

  67. [...] Anyone who has written a remotely complex web application applying TDD philosophies will find the following interesting: http://blog.james-carr.org/?p=44 [...]


  69. Adrian Mouat says:

    Are these really “patterns”? I would argue they are closer to “code smells” as defined by Martin Fowler in “Refactoring”.

  70. [...] TDD Anti-Patterns Various antipatterns of the testing and test-driven world. Pretty sure I’ve encountered my fair share of them along the way–how about you? (tags: tdd testing antipatterns programming patterns development agile) [...]

  71. Are these really “patterns”? I would argue they are closer to “code smells” as defined by Martin Fowler in “Refactoring”.

    Yes, indeed, that was the same thing I was thinking about.
    But I could be wrong too.

  72. John Cowan says:

    Sometimes the Giant has its uses.

    A little while ago, I wrote a test for a method that has to take a number of factors into account and return one of two results. For example, if the content is precious, always return A, but if not, then if the version is 3, always return B unless the user has requested A, etc. etc.

    The giant test I wrote does the full combinatorial explosion of assertEquals tests: version 1, 2, or 3; user requested A or B; content is precious or not; etc. There end up being 72 calls.

    Now I could refactor that to 72 tests, but I don’t think that would help anyone much. And doing “just enough” testing would have turned out to be not enough: just one of the 72 tests actually failed after I wrote the code, but it exposed a deep logic error. (Now I’ve had to change the code to satisfy new requirements, and half the tests are failing! Arrgh. But I’ll get it fixed eventually.)

  73. Graham Lenton says:

    How about the ‘Honest, Guv’ where the expected outcome is so entropic that the developer simply asserts true with a comment ‘this works, honestly..’

  74. [...] start off with something light. James Carr’s TDD Anti-Pattern Catalogue is a good, fun read that’s downright hilarious at times (but only because we remember writing [...]

  75. [...] LibraryScott Hanselman Dev&TestTDD Anti-PatternsEl Ave [...]

  76. Gishu Pillai says:

    I started a thread on stackoverflow.com, which is what I associate with voting nowadays :)
    Anyways you may want to take a look at an informal anti-pattern survey of sorts here
    http://stackoverflow.com/questions/333682/tdd-anti-patterns-catalogue#333697

    and of course.. uber post. Thanks for taking the time to put this down.

  77. [...] A catalog of common issues individuals and teams run into while doing TDD on their projects is well under way in this blog entry. [...]

  78. [...] is a translation of James Carr’s article, TDD Anti-Patterns. In that article some of the anti-patterns were written by Frank Carver, Tim Ottinger, Cory [...]

  79. Carlos Ble says:

    Excelent article James!
    I’ve translated it to spanish: http://www.carlosble.com/?p=225

    Thanks

  80. [...] 9, 2009 suomala I recently ran into this again. It is James Carr’s article TDD Anti-Patterns about the pitfalls of unit tests. Light reading, but it still contains plenty of good [...]

  81. fact says:

    TDD is an anti-pattern.

  82. [...] James Carr » Blog Archive » TDD Anti-Patterns (tags: testing junit tdd) [...]

  83. [...] Carr published TDD Anti-Patterns. It says TDD, but these are really test (code-writing) anti-patterns [...]

  84. [...] James Carr » Blog Archive » TDD Anti-Patterns (tags: testing tdd programming) [...]

  85. Peter Hickman says:

    Having just sat down and read through a bunch of unit tests that can take 15 minutes to run I would like to propose the “test everything” anti-pattern. In this anti-pattern the unit tests test the functionality of the programming language along with the application.

    For example I have just seen a load of tests to see if a subclass has its superclass as an ancestor and that the methods of the superclass are all inherited. The only way in which this can break is if the language changes!

    I fully expect to see tests to make sure that 1 + 1 still equals 2, you know, just in case!

  86. [...] Carr has a comprehensive list of TDD Anti-Patterns. Full disclosure: I am regularly guilty of The One (one test method for the entire class) and The [...]

  87. [...] an old but still relevant article devoted to anti-patterns in test driven development, a large [...]

  88. [...] have to keep a lookout for TDD anti-patterns, and always remember.. Testing by itself does not improve software quality. Test results are an [...]

  89. [...] TDD Anti-Patterns James Carr (tags: anti-patterns tdd) [...]

  90. Andrej says:

    The problem with the Nitpicker is that it is not a unit test at all. A unit test is supposed to test a unit using the API of that unit. If the system is split into units well, a unit’s API should not change often or contain too many unimportant details.
    Testing a web interface is not unit testing, because instead of testing units we test the whole application via its user interface.

  91. James Carr says:

    @Andrej

    Indeed! A good unit test would not test the entire web application through the user interface, but rather test a single individual unit of the interface or system.

    However, it is not uncommon in web projects that span multiple teams to come across a JUnit test (referred to by other devs as a “unit test”) that uses Selenium, HtmlUnit, or some other means to test a specific unit in the whole bang of the system. The problem is that it’s an integration test or acceptance test, but the developers end up having it run with the developer tests.

  92. [...] James Carr (via Ned Batchelder): The Nitpicker: A unit test which compares a complete output when it’s really only interested in small parts of it, so the test has to continually be kept in line with otherwise unimportant details. Endemic in web application testing. [...]

  93. [...] TDD Anti-Patterns: a funny skewering of what people do wrong in unit tests var addthis_pub = ‘huynhtronghoan’; var [...]

  94. The Time Switcher:

    A test case that passes or fails depending on the time zone of the local machine (and/or whether or not daylight savings is in effect). Often fixed by switching the hour so that the test breaks twice a year.

  95. andreifeld says:

    Hi all,
    I’m thinking maybe the next question will be appropriate here in a TDD blog article…

    Let’s presume I’m running unit tests on a mysql DB.
    The DB is required to be up and running and I’m using my application APIs in order to load and save from the DB.

    Now this is very slow, as using this API against a persistent DB is very expensive.

    I am not a DBA, but is there a simple way that I can use my existing SQL tables as ‘in memory tables’? This would improve my testing framework enormously.

    thanks

  96. Tim Ottinger says:

    James:

    We produced an index-card-sized version of the list, reduced for space considerations, but any attempt to write up a quick statement on each would be about the same size as your article. Any chance you would agree with me posting this article, or as much as fits, on the site at http://agileinaflash.blogspot.com/2009/06/tdd-antipatterns.html?

    Feel free to use this posting’s email

  97. [...] I discovered an interesting blog thanks to Tim Ottinger’s comment on my TDD Anti-Patterns post, Agile in a Flash. It looks like they’re putting together a comprehensive book plus [...]

  98. [...] TDD Anti-Patterns – I came across a couple of people linking to this article from 2006 by James Carr which lists a number of TDD related anti-patterns. Some of the names are rather light hearted, but the problems they describe are quite serious [...]

  99. [...] TDD Anti-Patterns A list of most common TDD Anti-Patterns with short description [...]

  100. [...] TDD Anti-Patterns James Carr (tags: tdd) [...]

  101. [...] TDD Anti-Patterns Seems to be focused on unit testing, not so much test-driven development. Good points all the same. [...]


  103. I discovered an interesting blog thanks to Tim Ottinger’s comment on my TDD Anti-Patterns post, Agile in a Flash.

  104. [...] TDD Anti-Patterns [via ISerializable] [...]

  105. [...] platforms as possible to test open source software builds. Finally, a nice old blog entry regarding TDD AntiPatterns. Lot of stuff. Will keep an eye on those (and perhaps contribute to basie, who [...]

  106. argehaber says:

    I could not quite translate some sentences, but in general it was a useful article. I read it with admiration. Good work.

  107. [...] On another note i read an interesting article last night on TDD anti-patterns… It’s a good read and it taught me that as a newbie I am making a lot of classic TDD mistakes… Check out the article if you have a chance… http://blog.james-carr.org/?p=44 [...]

  108. [...] serves as an introduction and gives few advices/tips. The most interesting thing I've found is an article about TDD Anti-Patterns. It contains a list of anti-patterns, each one with a description. Quite useful, as probably most [...]

  109. Lance Walton says:

    This is a good catalog. But they’re not actually TDD anti-patterns; they are unit-test anti-patterns.

  110. [...] must always kept running. In my eyes it is wrong to demonize TDD. It is more a question of “how shall I use it” instead of “Too Damn [...]

  111. [...] you are interested in some examples, James Carr came up with a list of test anti-patterns. TDD Anti-Pattern Catalogue The Liar An entire unit test that passes all of the test cases it has [...]

Leave a Reply