Link: Advanced Usage of JUnit Theories, Multiple DataPoints, and ParameterSuppliers
Continue reading →
JUnit Tip: Verifying that an Exception with a Particular Message was Thrown
Continue reading →
Correct your URL
Continue reading →
Practical Introduction into Code Injection with AspectJ, Javassist, and Java Proxy
This post is aimed at giving you the knowledge that you may (or, I should rather say, "will") need and at persuading you that learning the basics of code injection is really worth the little time it takes. I'll present three different real-world cases where code injection came to my rescue, solving each one with a different tool that best fit the constraints at hand.
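To give a flavour of the simplest of the three tools, here is a minimal, illustrative sketch of intercepting method calls with java.lang.reflect.Proxy (the logging-collection example is made up for this teaser, it is not one of the cases from the post):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.Collection;

public class LoggingProxyDemo {

    /** Wraps any Collection in a dynamic proxy that logs each call before delegating to it. */
    @SuppressWarnings("unchecked")
    static <T> Collection<T> withCallLogging(final Collection<T> target) {
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                System.out.println("Calling " + method.getName());
                return method.invoke(target, args); // delegate to the real collection
            }
        };
        return (Collection<T>) Proxy.newProxyInstance(
                Collection.class.getClassLoader(),
                new Class<?>[] { Collection.class },
                handler);
    }

    public static void main(String[] args) {
        Collection<String> logged = withCallLogging(new ArrayList<String>());
        logged.add("hello"); // prints "Calling add", then adds the element
    }
}
```

AspectJ and Javassist go further - they can also modify classes that you cannot hide behind an interface.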
Continue reading →
DRY: Use JUnit @Rule Instead of Repeating Setup/@Before in Each Test
As true coders you are certainly annoyed by so many words, so let's get straight to the source code.
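To illustrate the idea, here is a minimal sketch of such a rule based on JUnit's ExternalResource (the rule and the resource it manages are invented for this example, they are not the post's actual code):

```java
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExternalResource;
import static org.junit.Assert.assertNotNull;

public class ServiceTest {

    /** Reusable setup/teardown; replaces copy-pasted @Before/@After methods in every test class. */
    public static class FakeServerRule extends ExternalResource {
        private String serverUrl;

        @Override
        protected void before() {
            serverUrl = "http://localhost:8080"; // start/prepare the shared resource here
        }

        @Override
        protected void after() {
            serverUrl = null; // stop/clean up the shared resource here
        }

        public String getServerUrl() {
            return serverUrl;
        }
    }

    @Rule
    public FakeServerRule server = new FakeServerRule();

    @Test
    public void canTalkToTheServer() {
        assertNotNull(server.getServerUrl());
    }
}
```

Any test class that needs the same setup now just declares the single @Rule field instead of duplicating the @Before/@After methods.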
Continue reading →
DbUnit Express 1.3 is Even Easier to Use and Still Better
Continue reading →
Most interesting links of August
Recommended Readings
- Martin Fowler on the problem of software patents - "... while patents (even software patents) are a good idea in principle, in practice they have turned into an unmitigated disaster and would be better scrapped."
- Discovering Hidden Design, Michael Feathers - When refactoring complex code towards a better design with a clearer separation of concerns, it may sometimes be worthwhile to factor out a more or less separate cluster of functionality even if it doesn't do just one thing (and it is thus difficult to find a descriptive name for it). Comparing the cost and benefit of this with an "ideal" refactoring (where we end up with single-responsibility pieces), this one may prove better.
- Martin Fowler: Tradable Quality Hypothesis - Martin argues that we must claim that quality in software development is not tradable (even though we know that certain tradeoffs can be made). The reason is that people are used to quality (in food, clothing, ...) being pretty "tradable", and so it is very hard to persuade them that in software development it is much less tradable (or not at all). And once your manager and customers view quality as tradable, you are doomed. They will force you to trade it for time, features, ... in a proportion that will harm the project (because, as already mentioned, in software quality is much less tradable than in other domains).
- Are estimates worthless? & Magne's response - an interesting discussion of the value and cost of estimation and its role in contracting w.r.t. trust - a nice addition to the discussion: Agile not suitable for governmental IT?.
- Generate Test Data with DataFactory - a nice Java library that generates "random" values of different types, optionally satisfying some constraints - e.g. first/last name (using a built-in or custom list), date (within a range, w.r.t. another date, ...), address (cities, streets etc.), email, random text/word/characters, number. Available at GitHub. (A small usage sketch follows below.)
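As an illustration, generating such data might look roughly like this; note that the package and method names below (org.fluttercode.datafactory.impl.DataFactory, getFirstName(), getEmailAddress(), getNumberBetween()) are written from memory and should be verified against the library's documentation:

```java
import org.fluttercode.datafactory.impl.DataFactory; // package name assumed, check the project docs

public class TestDataDemo {
    public static void main(String[] args) {
        DataFactory df = new DataFactory();

        // Names and cities picked from the library's built-in lists
        String name = df.getFirstName() + " " + df.getLastName();
        String city = df.getCity();

        // Other "random but realistic" values
        String email = df.getEmailAddress();
        int age = df.getNumberBetween(18, 65);

        System.out.println(name + ", " + age + ", " + email + ", " + city);
    }
}
```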
Continue reading →
DbUnit Express Tips: Setup Simplification, Custom Data File Convention
Continue reading →
A Funny Story about the Pain of Monthly Deployments
Continue reading →
Most interesting links of July
Recommended Readings
- Martin Fowler, M. Mason: Why not to use feature branches and prefer feature toggles instead, when branches can actually be used (video, 12 min) - feature branches are pretty common, yet they are a hindrance to a good and stable development pace due to "merge hell". With trusted developers, feature toggles are a much better choice.
- M. Fowler: The LMAX Architecture - Martin describes the innovative and paradigm-shaking architecture of the high-performance, high-volume financial trading platform LMAX. The platform can handle 6 million orders per second - using only a single Java thread and commodity hardware. I highly recommend the article for two reasons: first, it debunks the common view that you need multithreading to handle such volumes; second, for the rigorous, scientific approach used to arrive at this architecture. The key enablers are: 1) The main processing component does no blocking operations (I/O); those are done outside it (in other threads). 2) There is no database - the state of the processor can be recreated by replaying the (persistent) input events. 3) To get further, from 10k to 100k TPS, they "just" wrote good code - well-factored, small methods (=> HotSpot is more efficient, the CPU can cache better). 4) To gain another order of magnitude they implemented more efficient, cache-friendlier collections. All of that was done based on evidence, enabled by thorough performance testing. 5) The processor and the input/output components communicate without locking, using a shared (cyclic) array, where each of them operates on its own range of indexes and no element can ever be written by more than one component. Their internal range indexes only ever increase, so it is safe to read them without synchronization (at worst you will get an old, lower value). (A minimal sketch of this single-writer/single-reader idea follows after this list.) The developers also tried Agents but found them at odds with modern CPUs because they require context switches, which empty the fast CPU caches. Updated: Martin has published a post titled Memory Image which discusses the LMAX approach to persistence in a more general way.
- S. Mancuso: Working with legacy code with the goal of continual quality improvement - this was quite interesting for me as our team is in the same situation and arrived at quite a similar approach. According to the author, the basic rule is "always first write tests for the piece of code to be changed," even though it takes so much time - he justifies it by saying "when people think we are spending too much time to write a feature because we were writing tests for the existing code first, they are rarely considering the time spent elsewhere ... more time is created [lost] when bugs are found and the QA phase needs to be extended". But it is also important to know when to stop refactoring so that you also get to creating business value, and the rule for that is that quality improvements are done only with focus on the particular task at hand. I like one of the concluding sentences: "Constantly increasing the quality level in a legacy system can make a massive difference in the amount of time and money spent on that project."
- Uncle Bob: The Land that Scrum Forgot - Scrum makes it possible to be very productive at the beginning, but to keep that productivity and continue meeting the expectations it creates, we need to concentrate on some essential technical practices and code quality. Otherwise we create a mess of growing complexity - the ultimate killer of productivity. Uncle Bob advises us which practices to apply, and how, to attain both high, sustainable productivity and (as a prerequisite for it) high code quality. It's essential to realize that people do what they are incented to do, and thus we must measure and reward both going fast and staying clean. How do we measure quality? There is no perfect measure, but we can build on the available established metrics - coverage, # of (new) tests, # of defects, size of tests (~ size of production code, 5-20 lines per method), test speed, cyclomatic complexity, function (< 20) and class (< 500) sizes, Brathwaite Correlation (> 2), dependency metrics (no cycles, pointing in the direction of abstraction). The practices that enable us to stay clean include, among others, TDD, using tools like Checkstyle and FindBugs to find problems and duplication, and implementing Continuous Integration.
- Getting Started: Testing Concurrent Java Code - a very good and worthwhile overview of tools for checking and testing concurrent code, with links to valuable resources. The author mentions, among others, FindBugs, measuring concurrent test coverage (critical sections examined by multiple threads) with IBM's ConTest, multithreaded testing with ConTest (which randomly tries to create thread-interleaving situations; trial version - contact the authors for the full one) and MultithreadedTC (which divides time into "ticks" and enables you to fine-tune the interactions).
- The top 9+7 things every programmer or architect should know - a quite good selection of nine, respectively seven, things from the famous (freely available online) books 97 Things Every Programmer/Architect Should Know.
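To make the lock-free communication from point 5 of the LMAX item above more concrete, here is a minimal, illustrative single-producer/single-consumer ring buffer in plain Java. It is my own sketch of the principle described there - each side owns and only ever increments its own index, so the other side may read it without locking - not the actual LMAX Disruptor code:

```java
/**
 * Illustrative single-producer/single-consumer ring buffer: each side owns and only
 * ever increments its own index, so the other side can read it without locking.
 * At worst a reader sees a stale (lower) index and concludes "full"/"empty" too early.
 */
public class SpscRingBuffer<T> {
    private final Object[] slots;
    private final int capacity;
    private volatile long writeIndex = 0; // advanced only by the producer
    private volatile long readIndex = 0;  // advanced only by the consumer

    public SpscRingBuffer(int capacity) {
        this.capacity = capacity;
        this.slots = new Object[capacity];
    }

    /** Producer side: returns false instead of blocking when the buffer is full. */
    public boolean offer(T value) {
        if (writeIndex - readIndex >= capacity) {
            return false; // looks full (a stale readIndex only makes us more cautious)
        }
        slots[(int) (writeIndex % capacity)] = value;
        writeIndex++; // volatile write publishes the slot to the consumer
        return true;
    }

    /** Consumer side: returns null when there is nothing to read yet. */
    @SuppressWarnings("unchecked")
    public T poll() {
        if (readIndex >= writeIndex) {
            return null; // looks empty (a stale writeIndex just means we retry later)
        }
        T value = (T) slots[(int) (readIndex % capacity)];
        readIndex++; // volatile write tells the producer this slot may be reused
        return value;
    }
}
```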
Continue reading →
Experiencing JSF 1.2: Good but Needs a Framework
Continue reading →
Simple Logging HTTP Proxy with Grinder
Continue reading →
Having Database Test Ready in 10 Minutes with DbUnit Express
DbUnit Express is my wrapper around DbUnit that aims to make it extremely easy to set up a test of code that interacts with a database. It is preconfigured to use an embedded Derby database (a.k.a. JavaDB, part of the JDK) and uses convention over configuration to find the test data. It can also create the test DB from a .ddl file with a single call. Aside from simplifying the setup, it contains a few utilities that make testing easier, such as getDataSource() (essential for testing Spring JDBC) and RowComparator.
Originally I was using DbUnit directly, but I found out that for every project I was copying a lot of code, so I decided to extract it into a reusable project. Since then it has grown further to be more flexible and to make testing even easier.
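To give a rough idea of what a resulting test can look like, here is a sketch combining the embedded test DB with Spring's JdbcTemplate. Be warned that the class name EmbeddedDbTesterRule and the table in the query are written from memory/invented for this example - treat it as an approximation rather than documentation:

```java
import javax.sql.DataSource;

import org.junit.Rule;
import org.junit.Test;
import org.springframework.jdbc.core.JdbcTemplate;

import net.jakubholy.dbunitexpress.EmbeddedDbTesterRule; // class/package name assumed, check the project docs

public class UserDaoTest {

    // Prepares the embedded Derby DB and loads the conventional test data set before each test
    @Rule
    public EmbeddedDbTesterRule testDb = new EmbeddedDbTesterRule();

    @Test
    public void should_see_the_test_data() {
        // getDataSource() makes it easy to test code built on Spring JDBC
        DataSource dataSource = testDb.getDataSource();
        JdbcTemplate jdbc = new JdbcTemplate(dataSource);

        int rows = jdbc.queryForInt("select count(*) from my_test_table"); // table name is illustrative
        org.junit.Assert.assertTrue("expected the test data to be loaded", rows > 0);
    }
}
```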
Here are the seven easy steps to have a running database test:
Continue reading →
Ivy: Retrieve Both .jar And -sources.jar Into A Folder - Note to Self
Continue reading →
Going to Present "Programmer's Survival Kit: Code Injection for Troubleshooting" at JavaZone 2011
Continue reading →