Most interesting links of July '13

This month's selection focuses on languages and approaches (reactive programming, F#, Erlang, FP talks, etc.), agile (need for speed, recommended books), and Clojure/Linux/cloud tools and libraries.

Recommended Readings


Continue reading →

Running A Leiningen/Ring Webapp As A Daemon Via Upstart (Ubuntu)

Running a Java/Clojure app as a daemon on Linux used to be hard but is pretty simple with Ubuntu Upstart (docs). The short story:
  1. Create an all-in-one uberjar via "lein with-profile production ring uberjar" (using the lein-ring plugin; a simple lein uberjar would suffice for an app with a -main method)
  2. Create an upstart <service name>.conf file in /etc/init/
  3. Run sudo start/stop/status <service name>
And of course it works with Puppet too.
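The steps above boil down to a short Upstart job file. A minimal sketch follows; the service name, paths, and user are placeholders, not the post's actual values:

```
# /etc/init/mywebapp.conf -- a sketch; adjust names and paths
description "Leiningen/Ring webapp"

start on runlevel [2345]
stop on runlevel [016]

# restart the app if it dies
respawn

# run as an unprivileged user (Upstart 1.4+)
setuid webapp

exec java -jar /opt/mywebapp/mywebapp-standalone.jar
```

With this in place, `sudo start mywebapp` (and stop/status) manages the app like any other system service.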


Continue reading →

Installing & Troubleshooting Google Analytics 2013 (ga / analytics.js)

Setting up the new Google Universal Analytics (still in beta) is not completely obvious. You normally won't be able to send events from localhost, and it will claim "Tracking Not Installed." Here are some tips on how to use Analytics from localhost and test it.
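For reference, the relevant part of the analytics.js snippet might look like the following (UA-XXXX-Y is a placeholder property ID; per Google's analytics.js documentation, setting cookieDomain to 'none' is what allows tracking to work on localhost):

```
<script>
// the standard analytics.js loader ("isogram") goes here, omitted for brevity
ga('create', 'UA-XXXX-Y', {'cookieDomain': 'none'}); // 'none' => works on localhost
ga('send', 'pageview');
</script>
```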


Continue reading →

Creating A Chart With A Logarithmic Axis In Incanter 1.5.1

Incanter 1.5.1 doesn't support logarithmic axes; fortunately, it is easy to add one manually.

Update: Pushed improved version to Incanter.
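The helpers used in the final code below are not shown in this excerpt. A minimal sketch of what they might look like, based on JFreeChart's LogarithmicAxis (an approximation, not the exact code that was pushed to Incanter):

```clojure
(ns logaxis-demo
  (:require [incanter.core :as core]
            [incanter.charts :as charts])
  (:import (org.jfree.chart.axis LogarithmicAxis)))

(defn log-axis
  "Create a logarithmic axis with the given label."
  [& {:keys [label]}]
  (LogarithmicAxis. label))

(defn set-axis
  "Replace the :x or :y axis of an Incanter (JFreeChart) chart."
  [chart dimension axis]
  (let [plot (.getXYPlot chart)]
    (case dimension
      :x (.setDomainAxis plot axis)
      :y (.setRangeAxis plot axis))
    chart))
```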

This is how our final code will look:


;; core and charts alias the incanter.core and incanter.charts namespaces
(defn plot-power []
  (let [fun #(Math/pow 10 %)
        y-axis (log-axis :label "log(x)")
        chart (charts/function-plot fun 0 5)]
    (set-axis chart :y y-axis)
    (core/view chart :window-title "LogAxis Test: Incanter fun plot")))



Continue reading →

The Invisible Benefits Of Pair-Programming: Avoiding Wasteful Coding Excursions

There has recently been an article about how bad, expensive, and wasteful pair programming is, since you need twice as many developers. It used lines of code (LoC) produced per hour as its main metric. As many have commented, LoC is not a good measure; it is actually just the opposite, as I want to demonstrate from my own experience. (The article is questionable for other reasons too, such as providing no data to back its claim of a pair costing 2.5 times more without any quality benefits, which contradicts f.ex. the studies summarized in ch. 17 of Making Software: one showed 1.6x the cost plus better quality, another 1.15x the cost with 15% fewer failed tests.)

My main point is that by working with another person to whom you have to justify your plans, you can be saved from pursuing suboptimal or unnecessary solutions, thus considerably reducing both the time spent and the lines of code produced (more talk, less [wasteful] code).


Continue reading →

Most interesting links of June '13

Recommended Readings

Agile, process, SW dev, people etc.
Continue reading →

Brief Intro Into Random/Stochastic/Probabilistic/Simulation/Property-Based Testing

John Hughes and Stuart Halloway gave very interesting talks about random testing at NDC Oslo, a topic I have been ignorant of but want to explore more now. In contrast to typical example-based unit tests, where the developer specifies inputs, interactions, and specific validations, random testing generates random input data and/or sequences of interactions, and the verification is based on more general checks. Random testing can check many more cases than a developer would ever write, including cases that a human would never think of. It can thus discover defects impossible to find by traditional testing, as has been demonstrated f.ex. on Riak.

Random testing typically starts by creating a (likely very simplified) model of the system under test. The model is then used to generate the random data inputs and/or sequences of actions (method calls). Then the tests are executed and their input and output data captured. Finally, the results are validated, either against predefined "system properties," i.e. invariants that should always hold true, or manually by the developer.
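A toy example of this style in Clojure, using the clojure.test.check library (the property and generator here are my own illustration, not from the talks): the generator produces random inputs and the property states a general invariant that must hold for all of them.

```clojure
(ns random-testing-demo
  (:require [clojure.test.check :as tc]
            [clojure.test.check.generators :as gen]
            [clojure.test.check.properties :as prop]))

;; invariant ("property"): sorting is idempotent for any vector of ints
(def sort-idempotent
  (prop/for-all [v (gen/vector gen/int)]
    (= (sort v) (sort (sort v)))))

;; run 100 randomly generated cases; on failure, the input is
;; automatically shrunk to a minimal counterexample
(tc/quick-check 100 sort-idempotent)
```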

Related/also known as: generative testing, property-based testing (a paper).


Continue reading →

Patterns of Effective Delivery - Challenge Your Understanding Of Agile (RootsConf 2011)

Highlights from Dan North's excellent, inspiring, and insightful talk Patterns of Effective Delivery at RootsConf 2011. North has a unique take on what agile development is, going beyond the established (and rather limited and rigid) views. I really recommend this talk to learn more about effective teams, about North's "shocking," beyond-agile experience, and for great ideas on improving your team.

The talk challenges the absolutism of some widely accepted principles of "right" software development, such as TDD, naming, and the evilness of copy&paste. However, it challenges them in a positive way: it makes us think about the contexts in which these principles really help (many) and when it might be more effective to (temporarily) postpone them. The result is a much more balanced view and a better understanding of their value. A lot of it is inspired by the theory (and practice) of Real Options.

What are Patterns of Effective Delivery?
Continue reading →

Installing Latest Node.JS And NPM Modules With Puppet

PuppetLabs' nodejs module is unfortunately quite out of date, providing Node.js 0.6; however, there is a simple way to get the latest Node:
  1. Install the puppetlabs-apt module
  2. Add ppa:chris-lea/node.js to apt
  3. Install nodejs
  4. Steal the npm provider from the puppetlabs-nodejs module
  5. Install an npm module
Code:
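A sketch of what such a manifest might look like (the post's actual code is behind the link; the apt::ppa resource comes from puppetlabs-apt, and the npm package name here is an arbitrary example):

```puppet
include apt

apt::ppa { 'ppa:chris-lea/node.js': }

package { 'nodejs':
  ensure  => latest,
  require => Apt::Ppa['ppa:chris-lea/node.js'],
}

# uses the npm provider taken from the puppetlabs-nodejs module
package { 'express':
  ensure   => present,
  provider => 'npm',
  require  => Package['nodejs'],
}
```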


Continue reading →

Making Sense Out of Datomic, The Revolutionary Non-NoSQL Database

I have finally managed to understand one of the most unusual databases of today, Datomic, and would like to share it with you. Thanks to Stuart Halloway and his workshop!

Why? Why?!?

As we shall see shortly, Datomic is very different from traditional RDBMS databases as well as from the various NoSQL databases. It isn't even a database; it is a database on top of a database. I couldn't wrap my head around that until now. The key to understanding Datomic and its unique design and advantages is actually simple.

The mainstream databases (and languages) were designed around the constraints of the 1970s. Datomic is essentially an exploration of what database we would have designed if we didn't have those constraints. What design would we choose, having gigabytes of RAM, networks with bandwidth and speed matching or exceeding hard-disk access, and the ability to spin up and kill servers at a whim?
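To give a flavour of what this looks like in practice: in Datomic, every fact is an immutable "datom" (entity, attribute, value, transaction), and queries run against an immutable database value rather than a live connection. A minimal illustration using the real datomic.api functions (the attribute name is invented for this example):

```clojure
(require '[datomic.api :as d])

;; d/db takes a snapshot: an immutable database *value* you can query
;; repeatedly with consistent results (conn is a connection to a
;; running transactor, not shown here)
(d/q '[:find ?name
       :where [?e :person/name ?name]]
     (d/db conn))
```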


Continue reading →

Ignore requirements to gain flexibility, value, insights! The power of why

I would like to share an eye-opening experience I recently had. I have learned that if we do not just passively accept the requirements given to us but carefully analyse the reasons behind them (and the reasons behind those reasons), we gain incredible power and flexibility. By understanding the real value behind a requirement and by discovering other, related sources of value, we might find a superior solution and, more importantly, gain a few degrees of freedom in the solution space: the ability to scope the solution up or down and optimize it with respect to other solutions. Let's see how a seemingly fixed requirement can easily be expanded or shrunk once we bother to truly understand it.


Continue reading →

Most interesting links of May '13

Recommended Readings


Continue reading →

Tip: Include Context And Propose Solutions In Your Error Messages

A Puppet run has failed with an error message like this:
"No matching selector for 'prod' at some_puppet_file.pp:31"
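Applying the tip to this very message, a more helpful version would include the available options and a suggested fix. For instance (the selector values below are invented for illustration):

```
No matching selector for 'prod' at some_puppet_file.pp:31
  Available selector values: 'dev', 'test', 'production'.
  Did you mean 'production'? Check the environment fact on this node.
```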

Continue reading →

Accessing An Artifact's Maven And SCM Versions At Runtime

You can easily tell Maven to include the version of the artifact and its Git/SVN/... revision in the JAR manifest file and then access that information at runtime via getClass().getPackage().getImplementationVersion().

(All credit goes to Markus Krüger and other colleagues.)
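The artifact-version half of this is a standard maven-jar-plugin setting: addDefaultImplementationEntries writes Implementation-Version (the project version) into META-INF/MANIFEST.MF, which is what getImplementationVersion() reads. A pom.xml fragment (the SCM-revision part, not shown, needs an additional plugin):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <archive>
      <manifest>
        <!-- adds Implementation-Title/Version/Vendor to the manifest -->
        <addDefaultImplementationEntries>true</addDefaultImplementationEntries>
      </manifest>
    </archive>
  </configuration>
</plugin>
```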


Continue reading →

Lesson Learned: Don't Use Low-Level Lib To Test High-Level Code

Summary: Using a fake HTTP library to test logic two levels above HTTP is unnecessarily complex and hard to understand. Instead, fake the layer directly below the logic you want to test and verify the low-level HTTP interaction separately. In general: create thin horizontal slices for unit testing, checking each slice separately with nicely focused and clear unit tests. Then create a coarse-grained vertical (integration-like) test to check across the slices.

The case: I want to test that the method login sends the right parameters and transforms the result as expected. Login invokes post-raw, which calls an HTTP method. Originally I tried to test it using the library clj-http-fake, but that proved unnecessarily complex. It would be much better to fake post-raw itself when testing login, and to test the original post-raw and its HTTP interaction separately, using that library.
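In Clojure, faking the layer directly below is a one-liner with with-redefs. A sketch only: login and post-raw are the post's function names, and their signatures and data shapes here are my assumptions, so this won't run without the post's actual code in scope.

```clojure
(ns login-test
  (:require [clojure.test :refer [deftest is]]))

(deftest login-transforms-result
  ;; replace post-raw with a stub returning a canned response, so the
  ;; test exercises only login's parameter handling and transformation
  (with-redefs [post-raw (fn [path params]
                           {:status 200, :body {:token "abc"}})]
    (is (= "abc" (:token (login "user" "secret"))))))
```

The HTTP behaviour of the real post-raw then gets its own focused test, where clj-http-fake is the right tool.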


Continue reading →

Copyright © 2026 Jakub Holý
Powered by Cryogen
Theme by KingMob