Lombok

Java developer? Do you find class and method declarations cumbersome? Do you want to follow Mr Bloch's advice but dread the verbose boilerplate code? Look no further than Project Lombok!

Lombok helps you, as a Java developer, with mundane tasks such as creating no-args constructors, getters/setters, toString/equals/hashCode generation and more. All you need to do is annotate your classes, as described in the manual.

For example, if you put the @Data annotation on your class, you’ll get generated toString, equals, hashCode, getter and setter methods, AND a constructor taking all required fields as parameters. Phew!

Behind the scenes, these methods are generated by code from the Lombok jar file, which must be on the classpath. Plugins exist for most IDEs, so it’s easy to get going. Here’s what you need for IntelliJ:

  1. Install the lombok plugin from the plugin repository.
  2. Add the Maven dependency org.projectlombok:lombok, version 0.11.6.
  3. Annotate your data classes and use them. For example:
@Data
public class Person {
    private final String name;
    private int ageInYears;
}

public class PersonTest {
    public static void main(String[] args) {
        Person person = new Person("Christian Fogel");
        person.setAgeInYears(145);
        System.out.println(person.getName() + "," + person.getAgeInYears());
        System.out.println(person);
    }
}

The above would, surprisingly, not only compile but also print

    Christian Fogel,145
    Person(name=Christian Fogel, ageInYears=145)

Pretty neat isn’t it? As usual, don’t forget to read the fine print, test your classes thoroughly and beware the development status of the project (as indicated by the version number).
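For comparison, here’s a hand-written sketch of roughly what @Data saves you from typing for the Person class above. This is a simplification, not Lombok’s exact output – the real generated equals() also uses a canEqual() check, and the hash constants differ:

```java
// Hand-written sketch of roughly what @Data generates for Person.
public class Person {
    private final String name;
    private int ageInYears;

    // Constructor takes the required (final) fields only
    public Person(String name) {
        this.name = name;
    }

    public String getName() { return name; }
    public int getAgeInYears() { return ageInYears; }
    public void setAgeInYears(int ageInYears) { this.ageInYears = ageInYears; }

    @Override
    public String toString() {
        return "Person(name=" + name + ", ageInYears=" + ageInYears + ")";
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Person)) return false;
        Person other = (Person) o;
        return ageInYears == other.ageInYears
                && (name == null ? other.name == null : name.equals(other.name));
    }

    @Override
    public int hashCode() {
        int result = 31 + (name == null ? 43 : name.hashCode());
        return result * 59 + ageInYears;
    }
}
```

Thirty-odd lines of boilerplate replaced by one annotation – that’s the whole pitch.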

Just Read: The Two Second Advantage

Coincidentally, the topics in the book “The Two Second Advantage” by Vivek Ranadivé and Kevin Maney are both interesting and fascinating from an Agile perspective, so I’ll discuss them here in my impression of the book.

The book tries to show how some parts of human talent come from wiring the brain in such a way that it perceives things just a little bit ahead of time. Phenomenal individuals, such as hockey virtuoso Wayne Gretzky and pickup artist “Mystery”, are brought forward as examples of humans who have acquired such talent. The authors argue there are a number of factors required for this wiring to take place, along with the now famous 10,000 hours of deliberate practice. To be able to predict the future, or assess situations very quickly, the authors argue that the brain does something called “chunking”. This means that the brain, through repetition or practice, over time builds up a “chunk” of knowledge on a certain topic. It can then use this chunked mental model for predictions, instead of having to look at all the available data stored in the brain’s neurons. Supposedly, this is partly due to the substance myelin building up along frequently used neurons.

All this is quite fascinating but focuses on the human body and its capabilities. These topics are just the beginning of the book, however, and the authors soon progress to transposing their arguments to the business world. Here it starts to get even more interesting for us in the software business. They talk about the data explosion going on in today’s connected world, and about how 20th-century solutions for extracting important statistics and other information from what is gathered on a company’s servers are simply not enough anymore.

Conventional data mining and analysis has a couple of problems: it deals with how things used to be, not how they will be, and with the amount of stored data growing, there are also situations where the data simply cannot be mined fast enough.

So, what has all of this got to do with Agile? Well, as I see it, companies taking the necessary steps to introduce forward-looking software, to gain a prediction advantage over competitors, will need to have an Agile organization. They will need it because when their new predictive system tells them what to do, they must be able to pivot and quickly put the knowledge to use before it gets old and useless. If done right, that must be the nirvana of Agility: to have brain-like software constantly mining the business-related data to predict what customers or clients will want in the future, and to have the organization and delivery procedures ready to carry out the necessary changes ahead of time, with exactness.

The book is short and on topic, connects the workings of the mind with 21st-century business processes, and includes a vast number of fascinating stories and references. You could say it gets your mind spinning, and it’s worth a read.

Just Read: Googled

In ‘Googled’ by Ken Auletta, you get to follow the journey that Sergey Brin and Larry Page undertook when creating one of the true giants among modern media companies. The book describes the startup of Google, the growing years, and on to the later years of the dominance it enjoys today.

Apart from this, Google’s several legal battles are in focus. An example is the long-winded struggle with publishers over the Google Books project, which intends to scan every written book into digital form. Another problem for Google described in the book is government influence on the company, and Google’s need to place a larger and larger lobbying group in Washington, D.C. to deal with such influences.

The author also spends a vast number of pages on the impact Google is having on the rest of the world in general, and on ‘traditional media’ in particular.

Well, this isn’t a proper review but more of an extended opinion; still, the above outlines what the book is about. It’s a good book. The most entertaining parts are found in the story sections, where the reader follows the creation of a giant tech company. The author has great details on everything from Brin and Page living together in an apartment among servers and fast food boxes, to the reasoning behind Google’s spectacular employee benefits. The book details relationships and decisions at the highest level of the hierarchy, where Eric Schmidt was taken aboard as CEO, and it introduces the reader to the mercurial mediator ‘Coach Campbell’.

The sections after the story are also good, but here the author seems to lose focus on Google itself, and instead wades through masses of references to various other events in the media sector, the non-transformation of ‘traditional media’, and segments that question Google’s struggle to maintain its “Don’t be evil” facade. Unfortunately, even though great work must have gone into sourcing all these references, it’s easy to get confused, and hard to find a common thread through all of it.

To summarize, I like the first part best and would have been happy with a shorter book, or with even more details on Google’s story and the company itself. I would not say the other parts are a waste – they provide a very good overview of the media landscape created in Google’s wake – they are just not as entertaining. Whatever the case, the book is a ‘must read’ for me overall, thanks to the author’s detailed description of the spectacular story and world impact of Google, the greatest disruptor!


Elevate: Continuous Delivery

This Monday, Neal Ford of ThoughtWorks held a talk about Continuous Delivery at an Avega Elevate. I wanted to give a brief recap of it, not only due to the quality of the lecture but also because the topic touches on issues around moving organizations towards a Lean or Agile process.

With continuous delivery you strive to keep track of every change in your product. Not only does this mean version controlling the code, but it also includes keeping track of changes to the platform you run the system on, the dependencies on 3rd-party libraries, the application configuration files, the database schemas, etcetera. You strive to build and thoroughly test each of these changes to your product, so that there is always a deliverable ready to be shipped to a potential customer.

Now, continuous integration in some shape or form is something I think most teams do today; at least that’s my experience from the last couple of years. But what Ford argued is that you can now, with the increased quality of tools, automate a great deal of the usually arduous process of actually building and deploying your product. The idea here is that if you have total control of what components your product consists of, and you continuously build and test it against a homogeneous environment, you will be able to not only deliver, but also deploy, your product to production at every change.

I won’t delve further into continuous delivery here (I might revisit the topic), because I think Ford hinted at other benefits than minimizing human resources and effort. If you really want an Agile organization, you would ideally want to be able to change your mind on a whim, and cast your product straight in the opposite direction of where it’s going, wouldn’t you? Ford didn’t like the abundant buzz around the word ‘pivoting’, but I’ll use it here because it’s a very good word for this. How much time does it take your organization to put a one-line code change into production? If you have two companies, one of which can deliver four times a year and the other 365 times a year, which one is most likely to be competitive? For me, this is a huge thing to think about when striving for agility.

I’d also like to mention how Ford described the way his company works when producing software. Everything needed by the developers, including the IDE and machine setup (Ford reminded us a couple of times that yes, nowadays machines can be built from source code), was in version control. At the office, the workstations each had an exact copy of the version-controlled environment, so every developer used replicas when developing. Furthermore, each day a workstation would be manned by a pair of developers, pair programming for that day. The next day the pairs were rotated, and each developer might end up at a different workstation, where a new developer constellation would take on the challenges of software production. (I should mention that each employee also had their own laptop, which could contain personal software such as iTunes.)

What a great way of incorporating XP practices, Agile, and team learning and interchange!

Agile Life: Transparency

Ever wondered what benefits Agile and Lean methods such as Kanban actually provide? Ever wondered what good transparency ever did the world? I’ll try to provide an example right here, straight from my own ordinary life!

My cohabitant uses a medication. She’s supposed to take it every morning, and that will make her feel better. If she doesn’t take it, bad things can happen during the day. Not necessarily, but they might. If she takes more than one, it’s apparently not A-ok either.

A while ago, I found this thing (photo below), lying on the bathroom sink. Now, if you’ve looked at the picture (and know Swedish) I hardly need to explain where I’m going with this or what the benefits of transparency are in this particular case. But hey, let’s say for the sake of it, that the picture isn’t showing up in your browser.

Transparency Tool!?

The thing she brought home is a plastic box, with seven compartments, one for each day of the week. She fills these, one pill in each, so that each time she takes a look at it she knows whether she’s taken the pill that day or not.

You know where I’m going with this now, don’t you? Since this box is in OUR bathroom, I happen to see it too, every time I visit the place. What do you think happens when I see a pill in the Wednesday compartment and it’s Wednesday afternoon?

Another effect of the box, in this simple analogy, is that I, an external party to the process, become much more aware of its details. Immediately after she started using the box, I learned how often she takes the medication, how many pills she takes each time, their size, etcetera – questions I previously couldn’t have answered if asked. In effect, she’s shared the process with me, in a very unobtrusive way…

Right, so I reckon transparency works like this also when you scale it up and use it in software development. Wouldn’t you agree?

Don’t Underestimate…

You know it happens all the time. You know you’ve done it yourself. Indeed, I’ve done it too: underestimated a story or task, that is. It may seem like an innocuous thing at the time, but of course it can have pretty dire consequences.

In my experience, people tend to underestimate tasks for a whole range of reasons. It easily happens in groups or teams where the members aren’t fully sure of where they stand professionally compared to the other members. Many people hesitate to tell a room full of new acquaintances that their estimate is way off target, and fail to realize that precisely that may be the ‘senior’ thing to do. Obviously, newly formed teams are prone to this issue.

There’s also the issue of pressure on developers to ignore risks in certain situations. When your estimates decide whether your company gets that ‘fat’ contract – a job you, as a developer, are desperate to land due to the nature, size, importance or technical aspects of the project – it may feel like the most natural thing in the world to underestimate. Pressure can also come from management, or from expectations on a person elevated to ‘architect’ status to perform miracles in no time.

Yet another underestimation trap to fall into is our tendency toward pure wishful thinking. Nothing bad could possibly happen to my progress when using this pre-alpha document database, I promise!

I’m not talking about zen Agile projects that have the same team running for years, in a business area where the money is flowing and everything is hunky-dory. The software world is not that perfect, and the above-mentioned constellations of new teams, short-term projects and up-front estimates are not uncommon.

Anyway, there’s a scientific term for this, namely the ‘Planning Fallacy’, which automatically makes it more comfortable; we know others are experiencing the same thing. Unfortunately, that doesn’t mean we want it to ever happen.

Thinking back, I remember a smaller contract project that a client requested offers for. I estimated the total effort to something like 50% more than the winning bidder. The project was supposed to run for about two to three months for a couple of developers. A year later the client had finally received an acceptable delivery. However, the code base of the product was not in a fully coherent state and thus my company got the chance to continue development. In the end, it would probably have been more profitable for the client to scrap the code base and restart from scratch.

If I would list a few recommendations to alleviate the problems, it would look something like this:

  • Management, project leaders and other stakeholders should try to conceal any preliminary budgets or estimates from the team. Otherwise, developers could easily be influenced.
  • Architects and senior developers must take responsibility for a mature view of task estimation. They should have the best grasp of the risks, and should try not to jump to conclusions.
  • The team should learn and use available estimation techniques, such as Planning Poker.
  • Everyone involved should be aware that in the long term, it’s more profitable for both the client and the contractor to cancel work that has very little chance of being completed within the given budgets.
  • It’s very hard to overestimate things by a large margin – stay on the safe side. Even if you manage to meet your daredevil estimates, will you be comfortable with the delivered code quality?

It’s not easy being a client; it’s hard to know which contractors can be trusted and which cannot. If you’re a manager looking at budgets there is an immediate pressure to find bargains. Developers also have a hard time, contractor or not, with the issues described above. But the next time you estimate something, anything, do our business a favour and don’t underestimate the cost of developing working software.

Responsive Web Design + HTML5

This Thursday there’s a promising Avega Elevate session on ‘Responsive Web Design and HTML5’. We start at 5.30pm at Avega in Malmö. If you want to join, PM my LinkedIn account – spots are limited, and the lecture is in Swedish :o.

Running Selenium IDE Against Different Environments

When running test suites inside Selenium IDE, you can set the ‘Base URL’, which is basically the domain part of the site under test. If you want to run the same test suite against various environments, such as ‘test’ or ‘stage’, you obviously want to run with different base URLs. However, it’s not obvious how to accomplish this, and it’s not well documented either.

What may happen if you just change the base URL is that the first test case in your suite runs against the new setting, while the rest run against the previous one. The cause is that each test case file can carry a hidden base URL of its own, stored in a link element (rel=”selenium.base”) in the test case’s HTML source.

What you want to do is simply remove the complete line with that link element. Then Selenium will use whatever you write in the ‘Base URL’ field when running the test suite. Handy, eh?

Personal Kanban

I’ve been doing personal Kanban at my current assignment for the last two to three months. A colleague had previously told me he was using it at home, and I wanted to see if I could benefit from it at work. Now, I didn’t actually look up the ‘correct’ way to do this, but instead opted to start from my current conception of the method. There was a leftover piece of cardboard lying around, so I took it and drew two lines on it, creating the ‘todo’, ‘doing’ and ‘done’ lanes. At the bottom of the board, between the doing and done lanes, I put my own ‘definition of done’ to remind me of what it should mean to complete a task. The criteria I chose initially were:

  • Reviewed (by myself),
  • Unit tested,
  • Javadoc written,
  • SoapUI/Selenium tests created and run,
  • Maven build and tests run,
  • Code committed,
  • Time spent reported into the time reporting software.

You can take a look at the picture below for the setup. Nothing revolutionary of course… I chose a WIP-limit of two and that’s been quite sufficient so far.

Personal Kanban Board

My experiences thus far are that I have become a little more structured and focused; I’m more aware of my current workload; I’m a little better at keeping track of tasks (the smaller ones that are easily forgotten); and I have an easier time filling out the time report each day. Of course, the benefits have not revolutionized my work day, but with the minimal effort involved I’d say the experiment is a success.

I’ve also learned some lessons: the company-supplied post-it notes keep falling off the board, so I should look into buying some top-quality ones. I also found that I sometimes put a task of too large a scope on the board, and then failed to complete it quickly enough. It clogged up the doing lane while I had to expedite other urgent matters. Clearly, I should have broken the task down into smaller ones!

As an experiment, trying out personal Kanban is something I can recommend. You should see the mentioned benefits quite quickly, and if you’re new to Kanban, you get to familiarize yourself with the concepts. If you want to know more, visit Jim Benson’s Personal Kanban site!
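For fellow developers, the mechanics of such a board can be sketched as a tiny model. The class and method names below are my own invention, purely to illustrate how a WIP limit constrains the ‘doing’ lane:

```java
import java.util.ArrayList;
import java.util.List;

// A toy model of a personal Kanban board: three lanes and a
// WIP limit that caps how many tasks may be 'doing' at once.
public class KanbanBoard {
    private final int wipLimit;
    private final List<String> todo = new ArrayList<>();
    private final List<String> doing = new ArrayList<>();
    private final List<String> done = new ArrayList<>();

    public KanbanBoard(int wipLimit) {
        this.wipLimit = wipLimit;
    }

    public void addTask(String task) {
        todo.add(task);
    }

    // Pull a task into 'doing', refusing when the WIP limit is reached.
    public boolean start(String task) {
        if (doing.size() >= wipLimit || !todo.remove(task)) {
            return false;
        }
        doing.add(task);
        return true;
    }

    // Move a task from 'doing' to 'done' (where the definition-of-done
    // criteria listed above would apply).
    public boolean finish(String task) {
        if (!doing.remove(task)) {
            return false;
        }
        done.add(task);
        return true;
    }

    public int inProgress() {
        return doing.size();
    }
}
```

With a limit of two, a third start() is simply refused until something is finished – exactly the nudge the physical board gives you.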

Shelving Changes

Imagine that you’ve been working for a while on a particular feature, and that the code is versioned under Subversion or some other VCS. You’ve made changes in a couple of files, but you’re not yet ready to commit. Then suddenly you need to switch quickly to another task, because someone in your team needs a bug fixed. Or perhaps you need to switch focus fully to another feature, because unfortunately someone re-prioritized in the middle of your iteration! This could get messy, especially if the features intersect code-wise. I bet you have (at least at some time during your career) copied the modified files to a location outside your project for storage, and then merged them back manually when the time came to resume work on the feature (it’s OK, no one can see you nod :-)).

A decent way of solving this situation in IntelliJ is to use the neat little feature Shelve Changes. It works sort of like an extra local layer of VCS. Just go to the Changes tab, right-click your modified files and then Shelve the Changes. This puts them in a shelf on the file system and removes them from the list of modified files. Now you can continue working as usual, committing and updating your local repo, until you want to Unshelve the Changes. Should there be any conflicts, IntelliJ provides a merge tool.

Obviously, this is even neater if you start using Change Lists properly. Then you have all the modified files grouped under the change list, and you can then easily Shelve the entire change list.

Finally, a word of warning: this type of local versioning is prone to all the usual problems of having local copies. It’s best used in rare cases when options are limited, and when you are prepared to potentially lose all the changes to a system failure.