Groovy Grape: Troubleshooting Failed Download
Continue reading →
Most interesting links of March '12
Recommended Readings
- ThoughtWorks Technology Radar 3/2012 - including apps with embedded servlet containers (assess), health check pages for webapp monitoring, testing at the appropriate level (adopt), JavaScript micro-frameworks (trial, see Microjs.com), Gradle over Maven (e.g. thanks to flexibility), OpenSocial for data & content sharing between (enterprise) apps (assess), Clojure (previously in assess) and CoffeeScript on trial (Scala very close to adopt), JavaScript as a 1st class language (adopt), single-threaded servers with async I/O (Node.js, Webbit for Java [http/websocket], ...; assess).
- Jez Humble: Four Principles of Low-Risk Software Releases - how to make your releases safer by making them incremental (versioned artifacts instead of overwriting, expand & contract DB scripts, versioned APIs, releasing to a subset of customers first), separating software deployment from releasing it so that end-users can use it (=> you can do smoke tests, canary releasing, dark launching [feature in place but not visible to users, already doing something]; includes feature toggles [toggle on only for somebody, switch off a new buggy feature, ...; see the sketch after this list]), delivering features in smaller batches (=> more frequently, smaller risk of any individual release thanks to less stuff and easier roll-back/forward), and optimizing for resilience (=> the ability to provision a running production system to a known good state in predictable time - crucial when stuff fails).
- The Game of Distributed Systems Programming. Which Level Are You? (via Kent Beck) - we start with a naive approach to distributed systems, treating them as just slightly different local systems, then (painfully) come to understand the fallacies of distributed programming and start to program explicitly for the distributed environment, leveraging asynchronous messaging and (often functional) languages with good support for concurrency and distribution. We suffer from random, subtle, non-deterministic defects and try to separate and restrict non-determinism by becoming purely functional ... . Highly recommended to anybody dealing with distributed systems (i.e. everybody, nowadays). The discussion is worth reading as well.
- Shapes Don’t Draw - thought-provoking criticism of inappropriate use of OOP, which leads to bad and inflexible code. Simplification is OK as long as the domain is equally simple - but in the real world shapes do not draw themselves. (And Trades don't decide their price and certainly shouldn't reference services and a database.)
- Capability Im-Maturity Model (via Markus Krüger) - everybody knows CMMI, but it's useful to also know the negative directions in which an organization can develop. Defined by Capt. Tom Schorsch in 1996, building on Anthony Finkelstein's paper A Software Process Immaturity Model.
- Cynefin: A Leader’s Framework for Decision Making - an introduction to the Cynefin cognitive framework - the key point is that we encounter 5 types of contexts differing in the predictability of effects, and each of them requires a different management style; using the wrong one is a recipe for disaster. Quote:
The framework sorts the issues facing leaders into five contexts defined by the nature of the relationship between cause and effect. Four of these—simple, complicated, complex, and chaotic—require leaders to diagnose situations and to act in contextually appropriate ways. The fifth—disorder—applies when it is unclear which of the other four contexts is predominant.
- Et spørsmål om kompleksitet (Norwegian: "A Question of Complexity"). Key ideas mixed with my own: Command & control management in the traditional Ford way works very well - but only in stable domains with clear cause-and-effect relationships (i.e. the Simple context of Cynefin). But many tasks today involve a lot of uncertainty and complexity and deal with creating new, never-before-seen things. We try to lead projects as if they were automobile factories while often they are more like research - and researchers cannot plan when they will make a breakthrough. Most new development of IT systems falls into the Complex context of Cynefin - there is a lot of uncertainty, there are no clear answers, we cannot foresee problems, and we have to base our progress on empirical experience and leverage emergence (emergent design, ...).
- The Economics of Developer Testing - a very interesting reflection on the cost and value of testing and what counts as enough tests. Tests cost to develop and maintain (and different tests have different costs; the more complex, the more expensive). Not having tests costs too - usually quite a lot. To find the right balance between tests and code, and between different types of tests, we must be aware of their costs and benefits, both short & long term. Worth reading, good links. (Note: We often tend to underestimate the cost of not having good tests. It is much higher than you might think.)
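As an illustration of the feature toggles mentioned in the Jez Humble item above, here is a minimal sketch in Java. The FeatureToggles registry and the "newCheckout" flag name are hypothetical stand-ins, not from the article; a real registry would be backed by configuration so flags can be flipped at runtime without redeploying.

import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical minimal toggle registry (illustrative only).
final class FeatureToggles {
    private static final Set<String> ENABLED =
        Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());
    static void enable(String flag) { ENABLED.add(flag); }
    static void disable(String flag) { ENABLED.remove(flag); }
    static boolean isEnabled(String flag) { return ENABLED.contains(flag); }
}

class CheckoutService {
    void checkout() {
        if (FeatureToggles.isEnabled("newCheckout")) {
            newCheckoutFlow();    // dark-launched path: deployed, but toggled on only for some users
        } else {
            legacyCheckoutFlow(); // known-good fallback; switch back if the new path misbehaves
        }
    }
    private void newCheckoutFlow() { /* ... */ }
    private void legacyCheckoutFlow() { /* ... */ }
}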
Continue reading →
Note To Self: What to Do When a Vagrant Machine Stops Working (Destroy or Up Failing)
See also my Vagrant Notes.
Continue reading →
Kent Beck: Best Practices for Software Design with Low Feature Latency and High Throughput
Continue reading →
Link: Benchmark and Scaling of Amazon RDS (MySQL)
Continue reading →
Most interesting links of February '12
Recommended Readings
- List of open source projects at Twitter including e.g. their scala_school - Lessons in the Fundamentals of Scala and effectivescala - Twitter's Effective Scala Guide
- M. Fowler & P. Sadalage: Introduction into NoSQL and Polyglot Persistence (pdf, 11 slides) - what RDBMS offer and why it sometimes isn't enough, what the different NoSQL incarnations offer, how and on which projects to mix and match them
- Two phase release planning - the best way to plan something at least somewhat reliably is to just start doing it, i.e. just start the project with the objective of answering "Can this team produce a respectable implementation of that system by that date?" in as short a time as possible (i.e. a few weeks). Then: "Phase 2: At this point, there’s a commitment: a respectable product will be released on a particular date. Now those paying for the product have to accept a brute fact: they will not know, until close to that date, just what that product will look like (its feature list). What they do know is that it will be the best product this development team can produce by that date." Final words: "My success selling this approach has been mixed. People really like the feeling of certainty, even if it’s based on nothing more than a grand collective pretending."
- Tumblr Architecture - 15 Billion Page Views A Month And Harder To Scale Than Twitter - what SW (Scala, Finagle, heavily partitioned MySQL, ...) and HW they use, the architecture (Firehose - event bus, cell design), lessons learned (incl. "MySQL (plus sharding) scales, apps don't.").
- Jay Fields' Thoughts: Compatible Opinions on Software - about teams and opinion conflicts - there are some areas where no opinion is really right (e.g. powerful language vs. powerful IDE), yet people may have very strong feelings about them. Be aware of what your opinions are and how strong they are - and compose teams so that they include people with compatible (not the same!) opinions - because if you team people with strong opposing opinions, they'll lose a lot of productivity. Quotes: "I also believe that you can have two technically excellent people who have vastly different opinions on the most effective way to deliver software." "I suggest that you do your best to avoid working with someone who has both an opposing view and is as inflexible as you are on the subject. The more central the subject is to the project, the more likely it is that productivity will be lost."
- Jay Fields' Thoughts: Lessons Learned while Introducing a New Programming Language (namely Clojure) - introducing a new language and winning the hearts of (a sufficient subset of) the people is difficult and requires a lot of extra effort. This is both an experience report and a pretty good guide for doing it.
- Jay Fields' Thoughts: Life After Pair Programming - a proponent of pair programming comes to the conclusion that in some contexts pairing may not be beneficial, i.e. the benefits of pair programming don't outweigh the costs (for a small team, small software, ...)
- Why Monitoring Sucks (and what we're doing about it) - the #monitoringsucks initiative - what tools there are, why they suck, what to do, new tools, what metrics to collect, blogs, ...
- JBoss Byteman 2.0.0: Bytecode Manipulation, Testing, Fault Injection, Logging - a Java agent which helps with testing, tracing, and monitoring code; code is injected based on simple scripts (rules) in the event-condition-action form (the conditions may use counters, timers etc.; see the rule-script sketch after this list). In contrast to AOP, there is no need to create classes or compile code. "Byteman is also simpler to use and easier to change, especially for testing and ad hoc logging purposes." "Byteman was invented primarily to support automation of tests for multi-threaded and multi-JVM Java applications using a technique called fault injection." It was used e.g. to orchestrate the timing of activities performed by independent threads, for monitoring and statistics gathering, and for application testing via fault injection. It contains a JUnit4 runner for easily instrumenting the code under test, which can automatically load a rule before a test and unload it afterwards:
@Test
@BMRule(name="throw IOException at 1st call",
    targetClass = "TextLineProcessor",
    targetMethod = "processPipeline",
    action = "throw new java.io.IOException()")
public void testErrorInPipeline() throws Exception { ... }
- How should code search work? - a thought-provoking article about how much better code completion could be if it profited more from patterns of usage in existing source code - and how to achieve that. Intermediate results are available in the Code Recommenders Eclipse plugin.
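To illustrate the event-condition-action form mentioned in the Byteman item above, a stand-alone rule script (a .btm file) for the same fault injection might look roughly like this - the class and method names are taken from the annotation above, the rest is an illustrative sketch:

RULE throw IOException at 1st call
CLASS TextLineProcessor
METHOD processPipeline
AT ENTRY
IF TRUE
DO throw new java.io.IOException("injected by Byteman")
ENDRULE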
Continue reading →
Profiling Tomcat Webapp with VisualVM and NetBeans - Pitfalls
Continue reading →
Cool Tools: Fault Injection into Unit Tests with JBoss Byteman - Easier Testing of Error Handling
import org.jboss.byteman.contrib.bmunit.BMRule;
import org.jboss.byteman.contrib.bmunit.BMUnitRunner;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(BMUnitRunner.class)
public class BytemanJUnitTests {
@Test(expected=MyServiceUnavailableException.class)
@BMRule(name="throw timeout at 1st call",
targetClass = "Socket",
targetMethod = "connect",
action = "throw new java.io.IOException()")
public void testErrorInPipeline() throws Exception {
// Invokes internally Socket.connect(..):
new MyHttpClient("http://example.com/data").read();
}
}
Continue reading →
Release 0.9.9 of Static JSF EL Expression Validator with Annotated Beans Autodetection
Continue reading →
Using Java Compiler Tree API to Extract Generics Types
It might be best to go and check the resulting 263 lines of CollectionGenericsTypeExctractor.java now. The code is a little ugly, largely due to the API being ugly.
Continue reading →
Separating Integration and Unit Tests with Maven, Sonar, Failsafe, and JaCoCo
The first part - executing ITs and UTs separately - is achieved by using the maven-failsafe-plugin and by naming the integration tests *IT (so that the unit-test-running maven-surefire-plugin will ignore them while failsafe will execute them in the integration-test phase and collect results in the verify phase).
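For illustration, a minimal failsafe configuration could look like the following (the version shown is just an example; failsafe picks up *IT.java classes by default, so no extra include pattern is needed):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.12</version>
  <executions>
    <execution>
      <goals>
        <!-- runs *IT.java in the integration-test phase -->
        <goal>integration-test</goal>
        <!-- fails the build on IT failures in the verify phase -->
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>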
The second part - showing information about integration tests in Sonar - is a little more tricky. Metrics of integration tests will not be included in the Test coverage + Unit tests success widget. You can add the Integration test coverage (IT coverage) widget if you enable JaCoCo, but there is no alternative for the test success metrics. But don't despair, read on!
Important note: The integration of Sonar, JaCoCo, and Failsafe evolves quite quickly, so this information may easily get outdated with the next releases of Sonar.
Versions: Sonar 2.12, Maven 3.0.3
Continue reading →
Troubleshooting Jersey REST Server and Client
Well, I don't know the ultimate solution but want to share a few tips.
Continue reading →
Most interesting links of January '12
Recommended Readings
- Jeff Sutherland: Powerful Strategy for Defect Prevention: Improve the Quality of Your Product - "A classic paper from IBM shows how they systematically reduced defects by analyzing root cause. The cost of implementing this practice is less than the cost of fixing defects that you will have if you do not implement it so it should always be implemented." - categorize defects by type, severity, component, and when introduced; 80% of them will originate in 20% of the code; apply prioritized automated testing (always solve the largest problem first). "In three months, one of our venture companies cut a 4-6 week deployment cycle to 2 weeks with only 120 tests."
- Ebook draft: Beheading the Software Beast - Relentless restructurings with The Mikado Method (foreword by T. Poppendieck) - the book introduces the Mikado Method for organized, always-staying-green (large-scale) refactorings, especially useful for legacy systems, shows it on a real-world example (30 pages!), discusses various application restructuring techniques, provides practical guidelines for dealing with different sizes of refactorings and teams, discusses technical debt in depth, and more. To sum it up in three words: Check it out!
- Daily Routine of a 4 Hour Programmer (well, it's actually about 4h of focused programming + some hours of the rest) - a very interesting read with some inspiring ideas. We should all find some time to keep up with the field, to reflect on our day, and to learn from it (kaizen).
- The Agile Testing Quadrants - understanding the different types of tests, their purpose and relations, by slicing them along the axes "business facing vs. technology facing" and "supporting the team vs. critiquing the product" => unit tests vs. functional tests vs. exploratory testing vs. performance testing (and others). It helps to understand what should be automated and what needs to be manual, and helps not to forget any of the dimensions of testing.
- Adam Bien: Can stateful Java EE apps scale? - What does "stateless" really mean? "Stateless only means, that the entire state is stored in the database and has to [be] synchronized on every request." "I start the development of non-trivial (>CRUD) applications with Gateway / PDOs [JH: stateful EJBs exposing JPA entities] and measure the performance and memory consumption continuously." Some general tips: don't split your web server and servlet container, don't use session replication.
- Brian Tarbox: Just-In-Time Logging - how to remove 90% of worthless logs while still getting detailed logs for the cases that matter - the solution is to (1) only add logs for a particular "transaction" with the system into a runtime structure and (2) flush it to the log only if the transaction fails or something else significant happens to it. The post also proposes a possible implementation in detail. (A minimal sketch of the idea follows this list.)
- DZone's Top 10 NoSQL Articles of 2011
- DZone's Top 5 DevOps Articles of 2011
- Test Driven Infrastructure with Vagrant, Puppet and Guard - this is interesting to me because I'm using Vagrant and Puppet on my project to create and share development environments or their parts, and applying a test-first approach to it seems interesting, as do the tools - rspec-puppet, cucumber-puppet, and Guard (events triggered by file changes) - and the referenced articles.
- 5+1 Sonar Plugins you must not miss (2012 version) - Timeline Plugin (with Google Visualization Annotated TimeLine), Useless Code Plugin, SIG Maintainability Model Plugin (metrics Analysability, Changeability, Stability, Testability), Quality Index Plugin (1-number health indicator), Technical Debt Plugin
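A minimal sketch of the just-in-time logging idea from the Brian Tarbox item above - the class and method names are mine, not from the article; a real implementation would likely be thread-local and bounded:

import java.util.ArrayList;
import java.util.List;

// Hypothetical per-transaction log buffer (illustrative only).
class TransactionLog {
    private final List<String> buffer = new ArrayList<>();

    void log(String message) {
        buffer.add(message); // cheap: kept in memory only, not written out
    }

    void transactionSucceeded() {
        buffer.clear(); // all went well => discard the detailed trace
    }

    void transactionFailed() {
        // something significant happened => now the detail is worth logging
        for (String message : buffer) {
            System.err.println(message); // stand-in for the real logger
        }
        buffer.clear();
    }
}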
Continue reading →
How to Create Maintainable Acceptance Tests
The key elements that contribute to the maintainability of acceptance tests are:
- Aligned business, software, and test models => small change in business requires only a similarly small change in the software and a small change in tests (Gojko Adzic explains that very well in his JavaZone 2012 talk Long-term value of acceptance tests)
- The key to gaining this alignment is to use business language in all three models from the very start, building them around business concepts and relationships
- Testing below the surface level, if possible
- Prefer to test your application via the service layer or, at worst, the servlet layer; only test at the UI level if you really have to, and only as little as possible, for the UI is much more brittle (and also more difficult to test)
- The more you want to test, the more you have to pay for it in terms of maintenance effort. Usually you decide to cover the part(s) of the application where the most risk is - the best thing is to do a cost-benefit evaluation.
- Isolating tests from implementation by layers of test abstraction
- Top layer: Acceptance tests should only describe "what" is tested and never "how" to test it. You must avoid writing scripts instead of specifications.
- Layer 2: Instrumentation - right below the acceptance test is an instrumentation layer, which extracts input/output data from the test and defines how to perform the test via a high-level API provided by the next layer (we could say a test DSL), such as "logInUser(X); openAccountPage();"
- Layer 3: High-level test DSL: This layer contains all the implementation details and exposes high-level primitives to the layer above, which can use them to compose tests without depending on implementation details (e.g. logInUser may use HtmlUnit to load a page, fill a form, and post it). See the PageObject example below.
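A minimal PageObject sketch, assuming HtmlUnit as in the example above - the page structure, form and field names, and URL are all illustrative, not from any real application:

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlForm;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

// Layer 3: a PageObject hiding HtmlUnit details behind business-level methods.
class LoginPage {
    private final WebClient client = new WebClient();

    AccountPage logInUser(String userName, String password) throws Exception {
        HtmlPage page = client.getPage("http://example.com/login");
        HtmlForm form = page.getFormByName("login");
        form.getInputByName("username").setValueAttribute(userName);
        form.getInputByName("password").setValueAttribute(password);
        HtmlPage accountPage = form.getInputByName("submit").click();
        return new AccountPage(accountPage);
    }
}

// Another PageObject wrapping the page the user lands on after login.
class AccountPage {
    private final HtmlPage page;
    AccountPage(HtmlPage page) { this.page = page; }
    String title() { return page.getTitleText(); }
}

The acceptance test itself then only speaks the business language, e.g. new LoginPage().logInUser("jane", "secret").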
(And of course many, if not all, of the rules for creating maintainable unit tests apply as well.)
Continue reading →
Visualize Maven Project Dependencies with dependency:tree and Dot Diagram Output
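For reference, a dot file can be generated and rendered roughly like this (file names are arbitrary; rendering assumes Graphviz is installed):

mvn dependency:tree -DoutputType=dot -DoutputFile=dependencies.dot
dot -Tpng dependencies.dot -o dependencies.png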
Continue reading →