Kent Beck: Best Practices for Software Design with Low Feature Latency and High Throughput

I was fortunate to attend Kent Beck's lecture summarizing his experiences and thoughts regarding efficient software design. Traditionally there have been two schools of thought about design: predictive design, which tries to design everything upfront (and makes a lot of wrong decisions), and reactive design, where any design is only done if it is absolutely necessary for implementing a feature (thus often developing on top of an insufficient design). Kent has tried hard to discover a design method that really delivers on the promises of both while avoiding their failures. This method is based on evolving the design frequently in small, safe steps and focusing on learning, while following some key best practices. It doesn't really matter what scope of design we are speaking about; the method and principles are the same whether you're redesigning a class or a complex system.

What is a good design?

Some of the key factors are low coupling and high cohesion, introduced in the book Structured Design roughly as follows:
  • Two elements are coupled if, whenever the first element (a method, class, system) is changed, the other element has to be changed as well.
  • Cohesion is the degree to which an element's internal elements are coupled to each other, i.e. an element is cohesive if all its internal parts belong together and are strongly related. We cannot avoid coupling, but we can try to isolate coupled elements into a single unit (perhaps exposing a simpler interface to the surroundings) rather than having them distributed all over the system, because this co-location makes changes easier.
Regarding coupling we can distinguish *potential coupling*, i.e. a coupling that actually isn't a problem for that particular system because the coupled elements in reality never change (though, of course, that could change in the future), and *realized coupling*, where the coupled elements indeed change and have to be kept in synchronization. It is of course the realized couplings that we need to limit. (It should be noted that in any complex system all the elements are potentially coupled to all the others - f.ex. adding yet another server to your farm may overload one particular switch, leading to failures and timeouts in remote parts of the system. There is no way to discover these couplings upfront.)
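(JH: To make the coupling/cohesion idea concrete, here is a tiny Java sketch of my own - the class and the VAT rule are invented for illustration, not taken from Kent's talk. Invoicing and receipt printing are both coupled to the VAT rate; co-locating the coupled pieces in one cohesive class turns a rate change into a one-place edit instead of a hunt through the codebase.)

```java
// Both invoicing and receipt printing depend on the VAT rate: change the rate
// and both must change. Keeping the coupled pieces together in one small,
// cohesive class means the change is realized in a single place.
class Vat {
    private static final double RATE = 0.25;    // the one place that changes

    static double withVat(double netAmount) {    // used by invoicing
        return netAmount * (1 + RATE);
    }

    static String receiptLabel() {               // used by receipt printing
        return "incl. " + Math.round(RATE * 100) + "% VAT";
    }
}
```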

So a good design should be easy to change. Other criteria include being easy to understand, supporting the requirements at hand, etc. Now that we have some idea of what a good design is, let's go back to the design methods.

Predictive Design

The predictive design promises high throughput - you design everything that will be needed at once, without a costly trial-and-error process. However, we only rarely know everything that is necessary; the reality is always (much) more complex than envisioned, and thus we end up with an unsuitable, suboptimal design. The start of a project is actually the worst time to make decisions because we will never know less about the software than at this point (JH: remember the lean principle of the "last responsible moment" for making decisions, after the most knowledge has been gathered but before it's too late).

Reactive Design

The reactive design promises low latency - instead of wasting days trying in vain to come up with the perfect design, you just start implementing features right away and adjust and clean up the design reactively, when you cannot proceed without changing it. However, the low latency is a lie because, as we continue building the software on top of an insufficient design, the development gets slower and slower (yesterday's sins make today's sins harder to commit).

Achieving High Throughput and Low Latency

How to achieve both a relatively high throughput and low latency, for real? How to avoid both the cost of making a design decision too late (and thus developing on top of an unsuitable system) and the cost of making the decision too early and being forced to change it later? According to Kent, the best available solution is to design the software incrementally and adjust the design very frequently, applying the following principles:
  1. Make changes in Small, Safe Steps. A safe change doesn't break anything, so it either has to be an automated refactoring (where the IDE guarantees its safety) or you must be pretty sure that it is safe, and the affected code should preferably also be covered by a solid and fast test suite. (JH: It must be fast to enable frequent changes.) The safety of changes is the key enabler of the high throughput - you always know where you are, your software is always working, and you can always go forward - or back, or just stop where you are.
  2. There are only four kinds of design changes that we make, and thus four approaches:
    1. A simple design change that is safe in itself - just make it (e.g. pulling a method up to a parent class).
    2. We know what we want to change but it is complicated - use parallel design, i.e. develop the new design while still keeping the old one, having them both side by side for a while. Then, when you feel confident, switch over to the new design, and only after it has proven itself remove the old one (a minimal sketch of this follows below). It might seem like a lot of unnecessary work, but it is safe, and it is constant safety that makes true speed possible. (JH: Which reminds me of the lean realization that local optimization - e.g. making a change quickly - often leads to the whole being suboptimal.)
    3. If you don't know what design you need, then simplify - try something simple, ignoring most of the known complexities, with the goal of exploring the domain. You want to learn as much as possible from the change. For example, if you are to implement a linear algebra system, first try just adding two numbers. Try - observe - learn.
    4. Use stepping stones: If you don't know in which direction your design should evolve, or you just cannot easily get from where you are to where you want to be, but there is something that would help you solve the problem if you had it (a tool, a library, a high-level API, a DSL, ...), then create this "stepping stone" first. F.ex. I don't know how to help my uncle, a vegetable farmer, plan the optimal trip through the local markets, but if I had a way to represent these markets, their profitability, and the routes between them in the computer, it would certainly help me to think about the problem further (a small sketch of such a representation follows right below).
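(JH: A possible stepping stone for the farmer's-market example, sketched in Java - all the types and numbers are invented. It doesn't plan any trip yet; it only gives us a representation of markets, profitability and routes that we can start asking questions of.)

```java
import java.util.List;

// A bare-bones model of the problem: markets, their expected profitability,
// and the routes between them. No optimization yet - just a stepping stone.
record Market(String name, double expectedProfit) {}
record Route(Market from, Market to, double distanceKm) {}

record MarketMap(List<Market> markets, List<Route> routes) {
    // Not the optimal trip - merely the first question the model lets us ask.
    double profitOfVisitingAll() {
        return markets.stream().mapToDouble(Market::expectedProfit).sum();
    }
}

class SteppingStoneDemo {
    public static void main(String[] args) {
        Market a = new Market("Saturday market", 1200.0);
        Market b = new Market("Harbour market", 800.0);
        MarketMap map = new MarketMap(List.of(a, b), List.of(new Route(a, b, 42.0)));
        System.out.println("Profit if all markets could be visited: " + map.profitOfVisitingAll());
    }
}
```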
With this approach the evolution of the software design becomes an integral part of your development process, and with frequent, safe steps it might feel like you are flying while developing.
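(JH: Returning to point 2 above, here is a minimal, hypothetical Java sketch of parallel design - the pricing logic and the names are made up. The old and the new implementation live side by side, their results are compared, and a single switch decides which one is actually used; the old code is deleted only after the new design has proven itself.)

```java
// Old and new design side by side; one switch decides which result is used.
class PriceCalculator {
    private final boolean useNewDesign;   // flipped only once we feel confident

    PriceCalculator(boolean useNewDesign) {
        this.useNewDesign = useNewDesign;
    }

    double price(double basePrice, int quantity) {
        double oldResult = priceOldDesign(basePrice, quantity);
        double newResult = priceNewDesign(basePrice, quantity);
        // Cheap, continuous check that the redesign behaves like the original
        // (with a tolerance, since the arithmetic is structured differently).
        if (Math.abs(oldResult - newResult) > 1e-9) {
            System.err.println("Designs disagree: " + oldResult + " vs " + newResult);
        }
        return useNewDesign ? newResult : oldResult;
    }

    private double priceOldDesign(double basePrice, int quantity) {
        // the discount rule is tangled into the arithmetic
        return basePrice * quantity - basePrice * quantity * 0.05;
    }

    private double priceNewDesign(double basePrice, int quantity) {
        // same behaviour, but the discount is now an explicit concept we can evolve
        double discountRate = 0.05;
        return basePrice * quantity * (1 - discountRate);
    }
}
```

Running both designs in parallel and logging any disagreement is one cheap way to gather the confidence needed before flipping the switch - and only then deleting the old path.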

JH: You might notice that one of the underlying key ideas is that software development is a learning process, where we learn about the domain and its intricacies as well as about the pros and cons of possible solutions. The learning is based on experience, and thus it is necessary that we can quickly try and test various ideas, get quick feedback on them, and then throw them away or continue developing them. To avoid the failure of reactive design, i.e. sticking to a bad design decision for too long, make smaller stepping stones, try to get feedback and real, hard data as soon as possible, and act based on them.

BTW, Kent calls this method "responsive design," if you want to find out more about it. First of all you may want to check out Kent's presentation, and perhaps also these detailed slides and a blog post by Carlo Pescio with many valuable links.

Reposted from blog.iterate.no.



