Jumping off the waterfall – lessons learned from an agile transition

Jumping off the waterfall - watch out for the rocks below!

At the Dublin leg of the 2014 Agile Tour, I recently gave a talk based on my experience leading a team’s agile transition from a classic waterfall approach. When preparing our move to agile, I found it fairly easy to find the success stories from other companies’ transitions. However, during our transition we hit some rocky patches along the way, often making us question the whole endeavour. So for my talk I focused a little more on the things that went awry for us along the journey – what can happen when a number of smart people make what look like sensible, low-risk calls which turn out to be neither!
We made a lot of mistakes along the way, so for those of you about to embark on a similar path, hopefully this will save you some heartache. And for those of you who have been through it already, well, you can sit back and laugh in relief at having avoided the mistakes we made! In the end, jumping off the waterfall is worth it, but it’s important to watch out for the rocks below!


The slides from the talk are embedded below. I put this post together as I’d had a few requests to share them, but the slides aren’t hugely useful on their own – they’re more an aid for me when speaking than designed to stand alone. So this post is an adaptation of my notes for the talk, but without any awkward pauses, umms or ahhhs along the way!


Who am I?
I’m a software developer by trade, focusing on PHP and web development.
I’m currently the CTO at Square1, a web and mobile development company based in Dublin. I previously spent 8 years at Distilled Media, Ireland’s largest indigenous publisher, home of websites like Daft.ie, Boards.ie, Adverts.ie and TheJournal.ie.

As Director of Product & Engineering with Daft.ie, I was responsible for leading the transition from a classic waterfall, big-design-up-front approach to a more agile one. Our waterfall approach had the usual kinds of problems. Big projects would take an age to complete, turning into endless death marches with no end in sight. Low team morale, lack of responsiveness when changes were required – textbook problems with the approach. We recognised change was needed, and agile seemed to be the way to go. So away we went, scouring the web for blog posts and videos from others who’d been through the transition.

How did it go?

So, how did it go in the end? Pretty well, actually! We ended up with a more regular release cycle, better quality products and an embedded testing culture within the team.

Testing was previously done pretty manually, clicking around and seeing what happened. We knew that unit testing was something we should be doing, though we had a large, untested legacy codebase to do it on, so we took the opportunity of the move to agile to start pushing towards a more test-focused setup. It’s not TDD yet, but it’s on the road to that goal now.

So everything worked out as hoped. Great! Is that the end of this short story?

The journey's the interesting part!
Not quite. I watch a lot of TV, too much TV. I like programmes like Columbo or Law & Order. The thing with them is that quite early on, you know how it ends. But that’s not really the point – the interesting part of the story is the journey, the twists and turns along the way.

The Japanese Gardens are pictured here also – I enjoy going to the Japanese Gardens in Kildare. The path through the gardens there is supposed to be symbolic of a journey through life. When you enter, there’s a short path to the exit – a couple of metres long, a nice flat path, not many plants along the way; you could walk it very quickly and easily. The other path is a longer trip around – it has dips and rises, curves, some mucky parts and a few overgrown plants getting in your way. It’s a longer path, but you get a much richer experience from it, seeing a lot more on your way around. So, despite our plans to take the quick and easy route, our agile migration took the longer path, which, whilst trying in places, taught us a lot more than if all had gone to plan initially!


So, first a bit of background to our team set-up when we began this transition. We had 7 or 8 developers, split roughly half and half between native mobile app developers and PHP devs handling the APIs and site front-end. These teams would typically be working on separate projects also, rarely having any direct overlap. We were also quite siloed between departments – design and development were completely separate, with often a long time between a project’s conception and initial design until development began. There was no distinct test phase, and at the start of the process we didn’t have a dedicated product manager either.

As mentioned earlier, we followed a classic waterfall, big-design-up-front approach, with all the problems this typically entails. The long, death-march projects we found ourselves on would easily end up with no end in sight, projects which the whole team was sick of and just wanted to be done with. This was a recipe for neither a good product nor a happy, productive team! So, change was needed.

A key element of our background here is that we had strong and consistent support from management, right from the top of the company down. It was recognised that there was a need for change, and that the change would likely be more costly in the short term, but the support was there, as the management team had fully bought in. I’ve talked to friends in other companies who have been through similar transitions without this level of support – management are happy for the transition to happen, but more acquiescing to it than fully buying in. So when the growing pains of introducing the new methodology kick in, as they inevitably do, it’s easy for a management team to get anxious and add pressure to the team to roll back to the old way – “wasn’t perfect, but we got stuff done”. Fortunately for us, we had a supportive management team where this wasn’t an issue.

At Daft, we were also our own internal client, which again made the transition that little bit easier. It’s a lot quicker to get a decision from a product owner sitting 5 desks away than playing email tag with someone on the other end of the country for a week or so.

So, that’s our context. We launched into our agile transition, and then what? Rocky waters ahead..

Lots of change

So, when we moved to our scrum process, we had a lot of things changing at once – our approach to product development, to product releases, even the tools we were using to track our workload. With all of these changes happening simultaneously, we were aware of how much new information we were asking the team to take on. So the temptation was to lessen the amount of “new” information to be absorbed, to try and make the transition easier.

We were previously using Basecamp for tracking tasks and line items left to be done on a release. Our new tool was ScrumWise, and we’d introduced terms like “Backlog” to the team’s vocabulary. At first glance, the Basecamp lists we were using were a reasonable analogue for backlog items, so when trying to talk the team through the new process, we’d discuss the new in terms of the old (“a backlog item is a bit like a Basecamp to-do list” etc).

In practice this didn’t work quite so well. One of the issues we had previously was that our process wasn’t all that well-defined. So when given a task, Developer A may have seen it as a Basecamp to-do list with a number of individual sub-items to be checked off, whereas Developer B may have seen it as a single sub-item on the overall product to-do list. As we’d quite a few rusty cogs in our old system, this wasn’t something we’d identified as one of our main pain points.

So what happened was that we took some of the confusion from the old system and transplanted it straight into our new system. This ended up causing us issues for a couple of sprints, until we did a hard reset and only used the new terminology, with no further attempts to draw it back to an old familiar term. The lesson we took from this was that change is tough, but in the end we would have been better off taking the pain of a bigger change up-front.

Centre of attention

We’d a relatively small team when starting on our scrum journey. I’d a strong degree of autonomy within the tech side of the business, so was effectively acting as product owner for the tech team. Additionally, I was facilitating the daily meeting at the time. So we thought, “hey, while we’re trying this new scrum thing and finding our feet, why not have the scrum master and product owner roles filled by the same person?”. We’d read up quite a bit about different teams’ scrum attempts and the tweaks they made; this came up in a few places, so we thought we’d give it a go.

By one measure, it went ok. For one thing, we had zero latency between the scrum master being asked a question and an answer coming back from the product owner! But pretty soon it became clear that the daily stand-ups were a series of progress reports being fed back to me, with little-to-no interaction across the team. There was an understandable reluctance from some of the team to be 100% open about what was going on – mistakes they may have made, problems they may have encountered – as there’s a concern about being the only one to report bad news to the boss. This often ended up with issues being discovered way later in the sprint than they should have been. We weren’t seeing any of the benefit of an open and confident team in these meetings – sharing and challenging each other to drive the product forward.

So I got out of dodge, with one of our senior devs taking up the scrum master responsibility. In some of the more recent scrum recommendations, the product owner has gone from a recommended watching (non-active) brief in these meetings to being barred altogether. This is definitely a change we found worked for the best. From discussing this with friends who’ve been through a similar transition, this type of effect can also happen in teams with very strong or domineering team leads too. A general rule of thumb is that, if a stand-up consists of the team one-by-one turning and reporting to the one person, then the stand-up probably isn’t working quite as well as it can just yet. Ban the boss!


Next up on our tour of “silly things we said and did” – specs. In our days of big, bad waterfall, we would often churn out large, detailed specs. These were soul-crushing, long, soul-crushing, detailed, soul-crushing documents to both generate and follow. And, inevitably, something would be missed in the spec. It wouldn’t be uncommon to be reviewing a product towards the end of a project, find some key cases unhandled and be met with a “wasn’t in the spec” response. Additionally, giving a developer a verbose document will in many cases naturally result in them turning off the creative part of their brain and rattling through the tasks given to them. These are the people who are working on your product 8 hours a day, who know the ins and outs of it. Having their creativity engaged when they run into an unanticipated problem can be really valuable, and there isn’t much scope for this when massive spec docs are dumped from on high.

So, specs were a pain for us. The idea of light-weight specs was incredibly appealing – a shorter outline document, walked through with the team at the start so as much knowledge as possible (the “why” as well as the “what”) is shared within the team, and everything will start running more smoothly. Or so we hoped.

Having been burned by the old spec creation process, we embraced the “light-weight” part of “light-weight specs” a little too keenly, and swung the balance a bit too far the other way. We were now producing 2-3 page outlines of the project, and talking them through with the team at the start of the project. However, as we were spending a lot less time on the spec creation, we’d often end up missing certain flows or cases in the product. As the team were still in “take a big spec and build it” mode, we weren’t fleshing these out in the planning meetings. (This also had a knock-on effect on the next rock we hit, which I’ll come to in a moment.) So we’d often end up spending quite a bit of time early in the sprint going back and forth with the team as these cases came up, costing us more time overall.

In the end, the sweet spot we got to was a 1-1.5 page outline of what we were trying to do, possibly including a user story or two. We’d then augment this with annotated wireframes of what was to be built, with notes on functionality and edge cases (“this happens for users buying property”, “this is only for those selling” etc). This combination seemed to work much better with us as a means of getting all the relevant information in to the team’s hands, without overwhelming them with too much detail.

Boxing clever

Time-boxing. It’s a fairly fundamental part of scrum, with everything having a defined box – the sprint, the planning, the retrospective, the lot. While time-boxing the overall sprint was something we were focused on from the start of our transition, time-boxing some of the internal events presented a bit more of a cultural challenge.

At Daft, our approach to many problems was “get a bunch of smart people in a room, point them at a problem, and they’ll sort it out”. Sometimes this was a quick process, sometimes less so, but it was something that was fairly baked into our culture. Suddenly having a stop-watch and cutting off discussion mid-flow was something that felt wrong, so we were pretty relaxed about enforcing the time box on sessions like sprint planning – there’s always the chance you’re a minute away from a revelation. In all of our reading on scrum adaptation, plenty of teams had “this works for us, but here’s a change we made to fit our company..”, so our thinking was that maybe we could be a bit less strict on some of these time boxes.

One of the off-shoots of our spec mishap mentioned previously was that, as the team began to learn that the specs were not as fully-featured as they used to be, they’d attempt to cover as many cases as possible in the sprint planning session. This would often devolve into lengthy discussions about small issues, down to incredibly specific detail, and would often have the planning sessions double up as an endurance test! Once the sprint started, we’d often find that we’d spent a lot of time estimating things that turned out not to be relevant, as other, more pressing cases presented themselves which we hadn’t dissected in detail during the planning.

With these marathon sessions eating up a ferocious amount of time and energy from the team, we got stricter on the time-boxing. And not just the session boxing, but boxing off discussion on individual tasks – no point getting to 1:45 of a 2-hour session and realising you’ve only assigned 20% of your sprint! It was a bit of a cultural change, but one we had to push through. With the individual task time-boxing, when time was up we’d do our points estimate and attempt to reach a consensus. If we were close, we’d allot another time box and go through it again, but if we hit a roadblock, very often we’d just park it and come back to it later. Often discussing the other tasks on the project would lead to a realisation of an alternate approach to take with those log-jammed tasks when we came back to them later on, or at least another perspective on how critical they might turn out to be for the current release.

So, get yourself a timecop early on, and save yourself a lot of heartache!

What’s the point?

Another issue that impacted our planning sessions was that we were using time-based estimations. This was another hangover from our previous planning, attempting to answer the obvious question from the rest of the business: “how long will this take?”. Our time estimates would follow a rough Fibonacci sequence, going from 0.1 days to 0.25, 0.5, 1, 2 and upwards. At the end of the task estimation, we’d then bundle these task estimates up into related chunks and see what we could deliver in the sprint.

The problem with this approach was that estimating how long a particular task will take to that degree of accuracy is pretty difficult, and something that people generally are not well-equipped to do. So we’d spend quite a bit of time focusing on very small and specific estimates, trying to pin them down as accurately as possible. In this process, it became very easy to zero in on one task in great detail, and later find that the estimate for this task was way out of line relative to other tasks estimated in the same session, meaning we’d occasionally lose the relative view.

On top of this, as our delivery commitment was formed by bundling up estimates until we hit the scrum team’s capacity for the sprint, allowing for things that might take away from that capacity was another issue we had to spend time discussing. So if we had a general allowance for things that reduced available time, and had a team geared to deliver at, say, 80% capacity in a sprint, we’d then need to think about one-offs in that sprint like dentist appointments, interviews being conducted and the like – all of which would take from our deliverable capacity, and all of which we’d need to spend time thinking about and incorporating into our planning. There would still be inevitable slippages for things like sick days, which can’t be predicted up-front. So, we were spending a lot of time worrying about the minutes and the hours, and still not seeing a significant improvement in our estimate quality.

During our mis-adventures in agile, we had some in-house training from Ger Hartnett and Colm O’hEocha at which we were introduced to the concept of planning poker. The idea is that, rather than commit to specific time estimates for a task, the team estimates delivery of a feature relative to another one built previously. A commonly-understood feature is assigned a basic number of points, and other features estimated against it. So if adding Facebook login was a 2-point project, then adding Twitter login may be about half as difficult, so is a 1-point project. To estimate a task, it’s first discussed as a group, then all members vote using point cards, again following the Fibonacci sequence. The thinking behind this sequence is that small tasks may have little difference between them, but as tasks get bigger it’s less likely we’ll be accurate in our estimates, due to the volume of unknowns. So there’s no point arguing whether a task is 8 or 10 points – if we’re at that high a level of estimation, we round up to the next point mark (13), and later revisit to see if we can break it back down.
Once the team have made their estimates individually, if they’re on the same page, that’s the accepted estimate. If there are outliers, each gets a chance to explain to the rest of the group why they’re correct, with re-votes until consensus is reached.
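The mechanics above can be sketched in a few lines. This is a rough illustration only – the names and votes are invented, and real planning poker is a conversation rather than a function call:

```python
from collections import Counter

# The rounded Fibonacci "deck" commonly used for story points
DECK = [1, 2, 3, 5, 8, 13, 21]

def round_up_to_card(estimate):
    """Round a raw estimate up to the next card in the deck."""
    for card in DECK:
        if card >= estimate:
            return card
    return DECK[-1]

def vote_round(votes):
    """If everyone agrees, return (estimate, []). Otherwise return
    (None, outliers) - the outliers explain themselves, then re-vote."""
    unique = set(votes.values())
    if len(unique) == 1:
        return unique.pop(), []
    most_common = Counter(votes.values()).most_common(1)[0][0]
    outliers = [name for name, v in votes.items() if v != most_common]
    return None, outliers

# "Is it 8 or 10?" - no point arguing, round up to the next card
print(round_up_to_card(10))  # 13

# One outlier triggers a discussion and re-vote
print(vote_round({"anna": 5, "ben": 5, "cara": 13}))  # (None, ['cara'])
```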

Breaking the link between time and points was difficult, with the team occasionally lapsing back into “1 point, hmm, that’s about a day and a bit”. Once the change in mindset took hold, it allowed for more productive discussions about objective task size, rather than a focus on hours and minutes.

The points approach also helped with the time we were spending worrying about dentist appointments and the like. Velocity – the average of the points delivered by the team over the previous 3 sprints – tended to smooth out people being sucked into unplanned meetings, sick days etc, so removed some of the pressure on that side also.
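The arithmetic is simple enough to sketch (the numbers here are made up for illustration):

```python
def velocity(points_per_sprint, window=3):
    """Rolling velocity: average points delivered over the last
    `window` sprints (3, as we used it)."""
    recent = points_per_sprint[-window:]
    return sum(recent) / len(recent)

# One sprint hit by sick days barely moves the rolling average,
# so there's no need to budget for every dentist appointment up-front
print(velocity([20, 14, 20]))  # 18.0
```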

One thing to note about this approach is that, if the business is not fully agile, there’s going to be some extra overhead on whoever is the functional interface with the rest of the business. If the CEO is asking how long until feature X is ready, they’re likely looking for a date rather than a quick talk about points. So being able to mentally translate “the team do 18 points per sprint, that item was 15, so will be done in the sprint starting next Monday” to “expect it to be done by Friday 27th” is a useful skill to develop!

Aside: Eat, Sleep, Sprint Plan, Repeat

As an aside, an observation from the afore-mentioned marathon planning sessions is that low blood sugar levels in your development team can have pretty calamitous impacts on your estimations, so having plenty of short breaks to ensure you’ve a team that’s adequately fed, rested and watered becomes important.

At the start of our planning sessions, we’d have a bunch of tasks jotted down, and would then attempt to remove the dupes and get estimating. Occasionally a task would slip through that was a near-dupe of an earlier one – it may have been worded slightly differently, or come at the same problem from a different angle, but was functionally the same thing. From time to time the dupe would slip all the way through and get estimated as one of the last tasks before breaking for lunch, having already been estimated in a similar form earlier on.

When this happened, it would typically turn out that the second estimate was orders of magnitude off – complexities would be glossed over, and potentially-thorny details not probed too much, as attention had turned to lunch and getting out of the room. So, even with the time-boxing above, it’s worth working small breaks into your planning schedule. Often attention spans don’t gradually tail off like a slow descent down a hill. They’re more like Wile E. Coyote chasing the Road Runner: he runs off the cliff, pauses in mid-air, then drops straight down. A hard crash.

It’s also worth trying to schedule your planning sessions to finish at some time other than the end of the day or just before lunch. The last thing you want is to have the few complex issues left to discuss being the only thing between your tired, hungry, decision-fatigued team and their lunch/home!

On a similar note, there was an interesting study done on Israeli judges charged with reviewing parole applications, where a similar trend was observed. The study is worth reading about, but in short, if you’re in an Israeli court, keep your fingers crossed that you’re seen first thing in the morning, rather than at the end of the day!

(The Friends logo is above under “repeat” as, when putting these slides together I was looking for a “repeat” graphic and struggled to find anything better than a browser reload icon. I did a bit of word association with my wife, asking her what the first thing that came to mind was when I said “repeat”. “Friends. Every time you turn on the tv, it’s on somewhere”. She’s not a fan! So that’s the connection there..)

Don’t look back in anger

Retrospectives were the next big learning for us. Getting these performing well was comfortably the most effective part of our whole scrum transition, enabling us to analyse and tweak the rest of our process far more quickly. However, we didn’t quite hit the ground running here either. The eagle-eyed amongst you may have spotted a trend here already!

A bit like eating your greens or going to the gym before work, a retrospective is something that we all know is good for us, but know inside that it’s just going to be terrible. But much like the greens and the gym, it’s never that bad once you start doing it, and leaves you feeling a lot better afterwards. At the start of the scrum journey, we were coming off the back of some pretty big and draining projects. So when we thought of retrospectives, there was some wariness all round. In parts of the dev team, there was a feeling that these meetings would essentially be group performance reviews, so they’d better come prepared with a big list of why they did everything right and forces outside their control let them down. And on the management side, there was a feeling that it’d be a 2 hour kicking from the team – “you didn’t explain this correctly”, “this requirement wasn’t mentioned until quite late in the day”, etc. Both pretty common and understandable reactions to a team new to this concept, and things we had to overcome.

So they were our concerns going in to our retrospectives, but in practice they generally turned out to be fairly reserved affairs. We’d a fairly close team, and there was almost a hesitance to get too detailed on things that had gone wrong for fear of causing problems for their friends, so these sessions weren’t yet as useful as they could be. Something needed to change.

We’d read quite a bit about blameless retrospectives enabling open communication without fear of reprisal or punishment down the line, and that was the point we wanted to get to. So we took some counter-intuitive steps to help push the team towards more frank communication. At this point we’d recruited a product manager (Colman Walsh of UXTraining.ie), and he and I would begin the meetings by focusing on the things we personally did pretty badly, so “I didn’t think enough about..”, or “I failed to realise how much was in..”, rather than the previous language patterns of “we may not have recognised X early enough”. This had a pretty interesting effect on the dialogue in the meetings.

On a number of points the more senior guys would begin to correct us – “I don’t think you made a mistake there, I actually think that I could have..” We’d talked a lot before the retrospectives started about the importance of being honest and about them being a tool for learning rather than punishment, but it’d be fair for a team to take these claims with a pinch of salt and still be somewhat hesitant to fully open up about issues they ran into, until they see more senior people in the team leading by example. A really interesting change this led to was some of the seniors listing things they were unhappy with about their performance, and it almost becoming like a personal buglist or puzzle to solve with each other, getting really engaged with it. Once the team were open about things they felt they could have done better, it became a lot easier to talk about the real causes behind general issues the team ran into.

So the retrospectives were definitely worth the time and effort put in to get them to an open and honest forum. A useful side note here is that during the retrospective we’d keep a note of what worked, what didn’t and general feedback from the team. It was useful to start sharing these notes amongst the whole team, as after a number of retrospectives it can be illuminating to look for patterns – “this issue sounds like one we struggled with a few sprints ago but haven’t heard it in a while – did we fix it then revert, or is there another cause that triggers these issues?” Being able to take this kind of big picture view was important to us getting through the above laundry list of mistakes we made and getting to a more productive place.

As an aside, I’ve heard a few people talk about retrospectives being a little pointless after every sprint, as “nothing really changes, so there’s no point in having them too often and having everyone repeat themselves”. This would set the alarm bells ringing for me – if there are consistent pain points and frustrations within a team, it’s a very rare case where absolutely nothing can be done within a 2-3 week period. If you’re eating an elephant, you do it a bite at a time, so as a scrum master in those situations it may be worth looking at the baby steps that can be taken to start down the path to whatever the resolution is, giving the team the sense that something is being done about their issues. And if it turns out that institutionally it’s completely impossible to work with the business to get any of the required change put in place, well, there’s a pretty active tech job market out there at the moment that may be worth re-acquainting yourself with!

Weight of the crown(s)


In a small team, having the scrum master also acting as a member of the development team is not uncommon. Usually this role will be taken on by one of the more senior team members, who may already be spending time on mentoring more junior developers, so it can feel like a natural extension of what they’re doing already.

One thing that can be quite easy to underestimate is how much time doing the role of scrum master can actually take. Depending on the project, the amount of work a scrum master has to do beyond facilitating the scrum events can vary wildly (chasing the product owner for answers etc). Because it can vary quite a bit from project to project, it can be tricky to first notice how much time it’s taking, and secondly to allow for it in project estimates. So this is an important one to keep an eye on, as sometimes the scrum master can be so busy greasing the wheels for the rest of the team that they’re the last to spot how much less time they have to work with when wearing their “development team” hat!

Despite it all..

We made all of these mistakes listed above, and a whole load more along the way. So how on earth did we manage to get through to the positive results I listed earlier on?

The retrospectives were certainly key, particularly once the team were comfortable being more and more honest in them about what was and wasn’t working. From a very early stage, we were quite open with the team about our scrum transition being a constant work in progress, and something we were willing and able to tweak regularly. A key part of this was the team feeling comfortable pushing back when things weren’t working, and us adapting accordingly.

I mentioned it earlier when talking about our context, but the supportive and fully bought-in management team were also key here. Particularly during our struggles listed above, when productivity would actually dip below the old way of doing things, not having a management team hitting the panic button and rolling everything back allowed us to work through our issues and come out the other end.


Overall, our transition to an agile development methodology was worth it – we improved the product and the pace of feature releases, and moving away from top-down spec design opened up far more channels for the people working on the front line of the product to help improve it. Regular re-evaluation, as well as a willingness to admit a particular approach just wasn’t working and scrap it, were key to getting through the mis-steps above to a point where we got these results.

The important thing to acknowledge is that there’s still a lot of improvement that can be brought to the process, even having been through this journey – it’s a constant process of assessment and adaptation – but the journey was definitely worth it, despite some of the rocky water we hit along the way!

4 thoughts on “Jumping off the waterfall – lessons learned from an agile transition”

  1. Pingback: AgileTour Presentation Available for Download - AgileInnovation

  2. Ursula – Oct 27, 2014, 1:30 pm

    Great read Paul. I think another thing that really helped us with the transition was using points instead of days, an idea we got when we did some formal Scrum training. I think it reduced some of the stress involved with estimating tasks.

    I’d be interested to hear what sort of questions/answers you had at the end of the presentation that you gave.

  3. conroyp – Oct 28, 2014, 8:58 pm

    Thanks Ursula – excellent comment on the points! I’ve added a section talking a bit about moving to points from time estimation. Any additional feedback from your recollection’s more than welcome!

    The Q&A afterwards was pretty brief, as the conference schedule was fairly packed. The main ones that came up were:

    – how long did the process take before we could feel that it was starting to work for us?
    I think it was at least 5-6 months before we started to see real benefits coming through, with further improvements following at a quicker pace after that. This was in part due to our approach of trying to bolt an agile approach on to the tail end of a large waterfall project, which I feel in hindsight set us back a few months.

    – how do you start a testing culture growing within a team?
    The first key to it is having an open-minded and ambitious team who want to improve themselves. We had this and it made it all a lot easier. Having some of the team research testing best practice and get the ball rolling with illustrative tests was a start, though to really bed it in to the culture it needs to be foremost in the team’s mind.

    So really simple things like having each pull request require a passing test output – even if no new test was added – was a small thing that really helped us keep testing in mind. Once the culture starts to take hold, it can be self-enforcing – so junior team members feel confident suggesting to more seniors that certain pull requests could do with some more tests, and a virtuous circle can develop that way.

    This was the automated side of things, with user acceptance test (UAT) scripts then introduced as the next step, from a more consumer-facing point of view. These scripts were originally generated by the product manager and responsibility for executing the tests stayed with them, but as the scrum team improved responsibility for these also began to be shared throughout the rest of the team.

  4. Ursula – Nov 3, 2014, 10:09 am

    Excellent commentary thanks Paul 🙂
