Lessons Learned from BCS SIGiST Summer 2013 Conference

Thought I could kill two ‘proverbial’ birds with one ‘proverbial’ stone here and document the latest conference I attended while also writing my first post on here in 2 years!

It was an early start: those of us attending met at the rail station and took the train into London Marylebone. After a short walk to the venue we were ready to start. A quick cup of tea and a biccy, then into the lecture theatre for the opening keynote:

The Divine Comedy of Software Engineering by Farid Tejani (Ignitr Consulting)

Farid confidently presented a talk outlining the changes that many industries have faced over the past decade, industries which have been ‘disrupted’ by the digital world, stating that “Instability is the new Stability”. Blockbuster Video and AOL’s ‘free internet’ CDs were used as examples of how industries have been disrupted. He then suggested that Software Engineering could be the next industry to be ‘disrupted’ by the digital revolution. If I understood Farid correctly, his point was that unless we adapt and change with our surroundings, companies will be left behind as their function becomes obsolete (much like Blockbuster Video).

He then turned to Software Testing and discussed how we were going wrong as an industry, suggesting some common misconceptions about testing which need to be corrected such as the following:

– Testing is a risk mitigation exercise
– Testing is an efficient way of finding defects
– Testing adds quality to software

He listed 9 of these in total and explained why he felt they were myths. Farid then covered the ever-contentious issue of agile testing versus waterfall/v-model, arguing that all companies need to move to a more ‘agile’ model to avoid being disrupted in the future. Personally, I can see what Farid is suggesting here; however, there will always be companies and industries which continue to use non-agile methodologies purely because they are mandated to (such as the aviation industry). That doesn’t mean they can’t evolve the way they work within their constraints though: there needs to be a better feedback loop at each stage of a project/product/programme.

Overall, it was an interesting talk, and the obvious message was that, as testing professionals, we should keep ahead of the curve and stay aware of changes that may affect our ‘Multi-Billion $ Industry’.

Then time for a tea break and another biscuit or two (maybe there were muffins at this point too :-))

Expanding our Testing Horizons by Mark Micallef (University of Malta)

Mark Micallef has industry experience, having managed the BBC News testing team amongst other roles, and has now moved to academia to lecture on and research software testing. He brought his knowledge to SIGiST to talk about some new ideas and techniques.

Mark started the talk by saying that, having attended many conferences, he has found the industry continuously talking about the same topics – Agile, Automation and Test Management amongst others. He felt that as industry professionals we should be striving to identify new and better ways to test and find ‘The Next Big Thing’ (as Web Automation was back in 2007). He suggested this could happen through the industry working more closely with academics. There may be obvious differences in the way the two worlds work, but there would be benefit for both sides. Academics tend to be all about understanding: they need to know exactly what they intend to do, and they look for perfect, complete and clean solutions. Software professionals, by contrast, tend to be pragmatic and develop a solution that works and ‘adds value’ rather than being perfect and complete. That obviously doesn’t mean one is wrong; they just have different ways of working.

He then suggested 3 techniques which could be used to improve testing.

The first insight was “Catching Problems Early, Makes them Easier to Fix”

This related to static analysis tools, but took the idea one stage further: most static analysis tools produce too many alerts, which can lead to cognitive overload and eventually to the tool or technique being abandoned. This brought us to the suggested concept of the ‘Actionable Alert Identification Technique (AAIT)’. The idea is that you apply criteria to the stream of alerts – areas of code you wish to focus on, certain junior developers’ code, or just code submitted in the last week – and the technique then ranks the alerts in priority order. This will obviously depend on some form of automation being set up to perform the analysis, but the outcome would be worthwhile.
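To make the ranking idea concrete, here is a minimal sketch of how such criteria might be scored. The criteria, field names and weights are entirely my own invention for illustration, not part of AAIT itself:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Alert:
    file: str       # path the static analysis tool flagged
    rule: str       # which check fired
    author: str     # who last committed the code
    committed: date # when it was committed

# Hypothetical criteria - in practice these would come from your own project.
FOCUS_PATHS = ("payments/",)
JUNIOR_AUTHORS = {"new_starter"}

def rank_alerts(alerts, today):
    """Score each alert against the criteria and return them highest first."""
    def score(a):
        s = 0
        if a.file.startswith(FOCUS_PATHS):
            s += 3  # code areas we most want to focus on
        if a.author in JUNIOR_AUTHORS:
            s += 2  # code from less-experienced committers
        if today - a.committed <= timedelta(days=7):
            s += 1  # changed within the last week
        return s
    return sorted(alerts, key=score, reverse=True)
```

The tester then works down the ranked list from the top, so the alerts most likely to be actionable are seen before cognitive overload sets in.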

The second insight was “We Should be Aware of the Effectiveness of Our Test Suites”

Mark started off suggesting that code coverage tools should be used to work out test coverage, but showed with simple examples that you can achieve 100% line, decision and function coverage without actually testing anything useful. His example had a function meant to multiply two numbers together, with an assert that passing in 5 and 1 gives 5. Simple enough, but if you change the multiply sign (*) for divide (/), passing in the same two numbers (in the same order) still gives 5, effectively rendering the test useless! The suggested concept here was ‘Mutation Testing’. In mutation testing you create multiple ‘mutants’ of your product by modifying the code, then run these mutants through your tests; it is a bad sign if all your tests pass as if nothing has changed. Ideally, even a small change in the code should cause at least one test to fail. Where mutants survive, you can identify the changes needed to improve the test cases.
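Mark’s multiply example is easy to reproduce. The sketch below (my own reconstruction of the example from the talk) shows a weak test that the mutant survives, and a stronger test case that kills it:

```python
def multiply(a, b):
    return a * b

# A 'mutant' of the same function, with * swapped for /.
def multiply_mutant(a, b):
    return a / b

# The weak assertion from the talk: 5 * 1 and 5 / 1 both give 5,
# so this test passes for the original AND the mutant - it cannot
# kill the mutant, giving us 100% coverage but no real confidence.
assert multiply(5, 1) == 5
assert multiply_mutant(5, 1) == 5

# A stronger test case does kill the mutant: 6 * 2 is 12 but 6 / 2 is 3.
assert multiply(6, 2) == 12
assert multiply_mutant(6, 2) != 12
```

Real mutation testing tools generate mutants like this automatically across the whole codebase, then report which ones your suite failed to kill.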

Mark did highlight some problems with mutation testing, such as the expense of generating mutants and executing the tests against them. Despite the problems it sounded like a very interesting technique, and it is certainly something I intend to investigate further.

The third and final insight was “Testing will Never Provide Guarantees that Bugs do not Exist”

Mark talked about how testing is currently seen as a way of following every possible path in the code to improve coverage. He suggested that an additional concept, ‘Runtime Testing’, could help. This is where you test exactly what the user may be doing right now, so it would work well for web pages or apps with a lot of user interaction. It requires mathematically working out the exact paths users may take through the software or website, which will not come naturally to everyone, and there may also be performance overheads if you are trying to test and measure what the user is currently doing.

Mark then went on to recommend how working with research and Academia can have huge benefits and that we should look into some of the whitepapers available on different topics to give ideas.

This was a great presentation which gave some new ideas, and new ways of finding ideas through academic research.

Requirements Testing: Turning Compliance into Commercial Advantage by Mike Bartley (Test and Verification Solutions)

Following straight on from Mark’s presentation was this one on requirements testing. Mike started off with the example of buying a laptop: you have a set of things you want when you go to purchase a new one, but once you have bought it, can you map the features of the laptop back to the requirements you originally had? It was an interesting point; people often have a preconceived idea of what they want and don’t always get exactly that.

Mike then moved on to software requirements and explained that they are a lot more complex. Poor and changing requirements have been a main cause of project failure for years, which highlights the need to capture and store them. Mike then asked: “How do we make sure requirements are implemented and tested?” The obvious answer is to ensure they are tracked and measured as the project progresses. Mike took this a step further and talked about mapping requirements to features, features to implementation, and finally to tests. Setting up a uni-directional flow of these mappings would highlight ‘Test Orphans’ (tests which don’t map to any requirement, feature or implementation) and ‘Test Holes’ (requirements, features or implementations with no tests mapped to them). This highlighted the importance of knowing the exact purpose of every test; I’m sure most projects which have gone through several releases have orphaned tests that are kept purely for sentimental reasons. These orphans waste time and effort, and of course the test holes highlight a risk, as a requirement will be missing a test.

Mike then went on to talk about Requirements Management and how there is a reasonable amount of tools which provide the ability to map requirements to features, designs, units and even to code but not necessarily to tests or at least not with the ability to show test status or results.

The presentation then showed how SQL can be used to create a bi-directional requirements mapping, meaning we can relate test status back to the requirements themselves. This sounds like a useful idea with a good return on investment: there would be an initial cost to build the database and to load all the requirements and test information into it, but the potential business advantage is massive, as all holes and orphans could be identified quickly, along with analysis of risk and impact.
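The orphan/hole queries are simple enough to sketch. The schema below is my own minimal invention (Mike’s real database would be richer), using SQLite from Python to keep it self-contained:

```python
import sqlite3

# Toy schema: one requirements table, one tests table linked by req_id.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE requirements (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE tests (id INTEGER PRIMARY KEY, name TEXT, req_id INTEGER);
    INSERT INTO requirements VALUES (1, 'User can log in'), (2, 'User can log out');
    INSERT INTO tests VALUES (10, 'test_login', 1), (11, 'test_legacy', NULL);
""")

# Test orphans: tests that map back to no requirement.
orphans = db.execute("""
    SELECT t.name FROM tests t
    LEFT JOIN requirements r ON t.req_id = r.id
    WHERE r.id IS NULL
""").fetchall()

# Test holes: requirements with no test against them.
holes = db.execute("""
    SELECT r.title FROM requirements r
    LEFT JOIN tests t ON t.req_id = r.id
    WHERE t.id IS NULL
""").fetchall()

print(orphans)  # [('test_legacy',)]
print(holes)    # [('User can log out',)]
```

Add a status column to the tests table and the same joins give you test results rolled up per requirement, which is the bi-directional view Mike was describing.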

The potential of this idea is huge and again is something that will require some additional investigation.

It was then time for lunch, and I have to say the food was delicious: Moroccan lamb with wedges and salad, followed by Eton Mess! We did a little networking and then decided to spend half an hour wandering around Regents Park to get some fresh air.

Improving Quality Through Building a More Effective Scrum Team by Pete George (Pelican Associates)

With my interest and knowledge in the Scrum area, this was probably the talk I most looked forward to before the conference. Pete George clearly knows his stuff and kept everyone interested throughout. He started off by talking about the “Marshmallow Challenge” (http://marshmallowchallenge.com/Instructions.html) and how useful a challenge it is for teams to try. He then showed a graph of how different industries had fared in the challenge. Obviously engineers and architects came out on top, as they were able to build the highest structures, but the next highest were kindergarten children. This says a lot about the techniques used by the other teams, which would generally spend 90% of the time building a structure out of spaghetti, put the marshmallow on top with about a minute to go, watch it fall, then quickly throw together something in the last 30 seconds that holds the marshmallow. The kindergarten kids used a completely different strategy: they built a stable structure with the marshmallow from the start, then continued to enhance it (effectively doing agile without realising it!). This was an interesting exercise which may be worth trying with our team.

Pete then gave a brief, succinct overview of the Scrum process using the 4 Artifacts – 3 Roles – 4 Ceremonies description. He talked about the fact that, with its continuous inspect-and-adapt approach, quality control is built into the Scrum framework. But the point Pete addressed next was the crucial one: a team is only as good as the people in it. Companies spend all the money they have on changing the process teams work in, but if the teams aren’t working well together then projects will still fail. Often the problem is that some people are set in their ways and refuse to change; these may be crucial people technically, but when it comes to teams they don’t seem to fit. This is where things such as team building (socialising as a team) come in, but also the Belbin Team Roles theory. The idea is that any good team will cover off all 9 roles shown below:

Belbin Team Roles

There are some interesting roles here, in my head, I’m already trying to identify who is which role within my own team.

It was a useful talk. Pete obviously had to aim it both at people who didn’t know a lot about agile/scrum and at those who did, and I thought he did a great job of covering both camps.

Mission Impossible: Effective Performance Evaluation as Part of a CI Approach by Mark Smith (Channel 4) and Andy Still (Intechnica)

This talk covered how Channel 4 included performance testing within their Continuous Integration setup to get constant feedback on performance. Andy Still from Intechnica started the presentation by talking about the CI approach. He said the first thing to do when considering performance testing for Continuous Integration is to treat performance as a first-class citizen: it should be considered as important as any functional requirement, and performance requirements should be defined and documented alongside the functional requirements at the start of the project. Andy added that the tests should have realistic pass/fail criteria relevant to the stage of the project, and should be able to run without any human interaction or interpretation. The next point was an interesting one: performance is a linear problem, not a binary one, meaning the check-in that broke the build may not be the one that caused the failure. The last check-in may have ‘tipped the scale’, but a previous one may have pushed the scales from nowhere near the limit to close to it. Andy then started a small debate by asking whether the real challenges to successfully implementing performance tests within CI were down to process or tools. He presented both sides of the argument well and left us to decide. Personally, I was split, as I can see that both process and tools present issues that could prove large challenges for teams trying to set this up.

Andy then handed over to Mark, who talked more about Channel 4’s particular needs for CI performance testing. Mark mentioned that performance issues can sometimes be more challenging to fix than functional issues, and that build failures and short feedback loops can stop these breakages making it into the codebase. He then discussed whether you would want pure CI performance testing, stating it may be disruptive at the start of a project, but on stable projects it provides early feedback on performance issues. He then talked about Channel 4’s particular choice of tools: Jenkins for build management, JMeter for load testing and WebPageTest for front-end instrumentation. Mark then went through how their particular system works.

Finally, Mark gave a brief description of what CI performance testing gives us: very short feedback loops, the ability to fail builds based on multiple metrics, and performance trending data.
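The ‘fail builds on multiple metrics’ idea could be sketched as a small gate script run from the build server. The metric names and thresholds below are purely illustrative assumptions of mine, not Channel 4’s actual configuration:

```python
# Hypothetical per-stage thresholds; in a real setup these would be
# tightened as the project stabilises.
THRESHOLDS = {
    "avg_response_ms": 800,   # e.g. from a JMeter-style load test
    "page_load_ms": 2500,     # e.g. a WebPageTest-style front-end timing
    "error_rate_pct": 1.0,
}

def evaluate_build(metrics):
    """Compare a build's metrics to the thresholds.

    Returns the list of failed metrics; an empty list means the build
    passes, so the CI job can exit zero with no human interpretation.
    """
    failures = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            failures.append(f"{name}={value} exceeds {limit}")
    return failures

run = {"avg_response_ms": 950, "page_load_ms": 2100, "error_rate_pct": 0.2}
print(evaluate_build(run))  # ['avg_response_ms=950 exceeds 800']
```

Storing each run’s metrics alongside the pass/fail result is what then gives you the performance trending data over time.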

Another very interesting talk and possibly my favourite presentation of the day, very well presented and it was a topic which could be of use.

Time for the final tea-break before the closing keynote.

Keep Calm and Use TMMi by Clive Bates (Experimentus)

Clive started by introducing who Experimentus are and what they do: an IT services company who help optimise their clients’ approach to Software Quality Management. He stated that testing needs to improve because software fails, and we need to be efficient at stopping those failures. He talked about doing things the right way and removing the barriers to quality work; the place to start is to recognise there is a desire to do better and to gather evidence of the problem. Clive then described the 7-point methodology used to help companies examine their process and make improvements, called IMPROVE (Initiate, Measure, Prioritise & Plan, Define/Re-Define, Operate, Validate, Evolve). He then went on to talk about Test Maturity Model Integration (TMMi), a staged assessment model like CMMi but focused on the testing process. There are 5 levels, from Level 1 – Initial to Level 5 – Optimising. From what Clive said, I understood that most companies would be working towards Levels 2/3, as these are the levels that require the most effort; Levels 4 and 5 then offer the icing on the cake, so to speak (there is more to it than that, but you get the idea).

Clive talked about why a client would use the TMMi model, stating it is the de facto international standard for measuring test maturity and is focused on moving organisations from defect detection to defect prevention.

He then talked about the features of an assessment and the types of companies who had been through one so far. It was an interesting talk and something different from the rest. I didn’t feel there was anything I could personally take back from this, other than asking the powers-that-be whether they were aware of it.

That was the final talk of a full and enjoyable day of presentations. I heard good things about the two workshops, but unfortunately there weren’t any places left when I looked at joining them.

So things I will take away:

– Mutation Testing looks like a useful technique to try
– Actionable Alert Identification Technique is something which could help us cut the noise from Static Analysis
– Using SQL to map requirements to features/tests looks useful
– Belbin Team Roles Theory could be used to associate team members with roles
– Continuous Integration with Performance testing is a must!
– I would like to attend the Certified Agile Tester training. 🙂


Agile Attitude

Heard these phrases before?

“How hard can adopting Agile methodology be?”

“If I go on a ‘Certified Scrum-Master’ course, it will all be ok, right?”

It is very common for agile to be misunderstood at all levels.

For a new team, starting to implement agile methodologies can be a daunting job, and the initial step is often to read a book and take it from there. I’m not saying there is anything wrong with reading a book – in fact, why not read 2, 3 or more? But in my experience, reading a book only helps the person reading it; that book will need to go around everyone in the team, including Project Managers, CEOs, Customers and everyone else involved in the project. Then comes the fun bit of figuring out that everyone has interpreted the book slightly differently and ‘doesn’t agree with all the principles’. This should ring alarm bells, but in a lot of cases it doesn’t: if members of the team don’t agree with all the principles of agile, then implementing it effectively will prove problematic. Once everyone has read and digested a book (or 2), the next stage should be to get everyone into a room for some interactive training. This can be a chance for an Agile Consultant/Coach to come in and show the team how it works, or a chance for the team to get together and discuss everything they have understood; either way, the important thing is that EVERYONE buys into it. A development team that buys in but a Development Manager that doesn’t can cause no end of issues with how the project is run. Or, in the more common case, the team buys in but higher-level management still want to see plans, schedules and a final release date. Of course, especially for a team new to agile, a firm schedule is very difficult to estimate when the team’s velocity is unknown, and a ‘firm’ schedule does slightly defeat the point of the term ‘Agile’.

A problem that is maybe a bigger concern is the team that ‘half-heartedly’ buys in to agile: they send one person on a Certified Scrum Master course and then hold all the sprint-related meetings (Kick-Off, Planning, Daily Stand-Up etc.) but do very little else – no TDD, no pair programming, no backlog burn-down. This probably suits the members who are ‘non-committal’ about the process, as it means very little extra effort apart from a few meetings. The problem is that there will be no marked quality improvement in the deliverables, and they won’t necessarily be delivered any quicker. It is imperative that the agile process is implemented in a disciplined way and that everyone fully accepts what is expected of them.

Agile should be implemented on strong foundations of understanding and a willingness to change from top to bottom. The most important change is one of attitude over priorities: working software instead of intricate design documents, willingness to respond to change over following a strict plan (not to quote the agile manifesto, of course!). The team will need to be motivated and recognised for successes – maybe a pizza delivery lunch once a sprint, or even a team-building day out; these kinds of gestures get the team bonding and wanting to work together to produce the goods! The move to agile should be a positive change; if it isn’t working, analyse the root cause and resolve any issues. There is a chance that the change to agile isn’t right for the team or project, and if that’s the case, then a process that brings the best out of the team should be identified. It isn’t for everyone, but those who use it and use it well have never looked back.

It should be understood that a difference in results will not happen overnight; getting the whole team on board and producing reliable, frequent deliverables of higher quality will take time and investment.

So be patient, I believe it’s been said before: “Good things come to those who wait!”

Testing Effectively in an Agile Project

This post will give an overview of how to make a testing team work effectively alongside a development team during an agile sprint.

The first step to ensuring the team works collaboratively is to make sure that all functionality to be developed within the sprint is well explained in frequent discussions, and that everyone understands how each area will work. While some form of functional documentation for each feature is being distributed and reviewed by all parties, discussions should be happening with all team members about how the feature could be tested; brainstorming sessions can help pull these thoughts together, and having the development team involved will not be a hindrance. Asking the development team about any uncertainties that arise in these discussions is imperative: there may be areas they haven’t thought of either, and there may be areas deliberately left open to allow the development team some creative freedom to develop what they feel best fits.

Questions may be asked of the development team all through the sprint cycle and therefore, they should make themselves available as often as feasibly possible (obviously, they have to be left to develop the features as well).

Where possible, it is ideal to automate as much testing as the team can manage. This will possibly cause a bit of pain to set up in the first place, but once the system is in place, future cycles become a case of running the scripts that have been configured rather than manually running every test. Writing an automated system may need to be planned out like a functional feature of the software. There are a few key points to take into account when automating:

Identify High Value Tests – It is important to identify the test cases which will provide the largest return on the time invested. 100% automation coverage shouldn’t be the aim, as it isn’t practical or cost-effective and would be almost impossible to achieve!

Automate What is Stable – Communicate with developers to make sure automated tests aren’t written for areas of code which are still volatile; tests for stable areas will aid the team’s effectiveness and avoid re-work.

Automated Tests Can be Run at Any Time – A large benefit of automated tests is that they can be scheduled for whenever they are needed.

Automation Helps Improve Software Quality – Automated tests generally run faster than a human tester, but the biggest benefit of automation is usually seen in the next release of the product. To see the benefit earlier, automate the long, repetitive tests first to free up testers for other tasks.

Another method to aid productivity is Test-Driven Development (TDD). I won’t go into the full details here, but suffice to say it is a practice where developers write just enough code to pass a failing test. There is a series of simple steps:

– Developer works with the tester to write a test
– Developer writes code which makes the test pass
– Developer refactors the code to evolve it into a better design
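The steps above can be sketched in a few lines. The discount function here is a made-up example of mine, chosen only to show the red-green-refactor shape:

```python
# Step 1 (red): tester and developer write a test first. At this point
# apply_discount doesn't exist, so the test would fail if run.
def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0

# Step 2 (green): developer writes just enough code to make it pass.
def apply_discount(price, percent):
    return price - price * percent / 100

# Step 3 (refactor): improve the design while re-running the test to
# make sure it stays green.
test_apply_discount()  # no exception raised, so the test passes
```

The test that drove the code then stays in the suite as a working, executable example of how the function is meant to behave.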

This process can benefit the testers for the following reasons:

Documentation and Working Examples of Code – by writing unit tests, the developers are providing the QA team with working samples of code, meaning they can gain a stronger understanding of how the system works, which will improve the quality of any functional tests then written by the testers.

Improves Code Quality – by having the unit tests written first, the quality of code will be stronger due to it being written to pass the tests. This will make the testers job easier later on in the cycle.

Team Works Together and Collaborates – there will be believers and non-believers of agile on any team. To make this process work, it may be a good idea to pair up a believer and a non-believer to help spread the practice. If and when a team is fully on board with TDD, the benefits for the product will soon become apparent.

If the team can work effectively and testers are able to set up an agile and automated environment, it will eventually lead to the testers being able to try some other methods to find defects such as exploratory testing!

Focused Stand-Ups – Constructive Daily Meetings

Once a team has completed their planning, they start collaborating to complete user stories and meet the sprint goal. The daily scrum should be a short meeting where all members get together and respond to 3 simple questions:

  • What did you do since the last meeting?
  • What will you be doing today?
  • Anything blocking your progress?

It sounds simple, but it is quite common for these 15-minute meetings to last 40-45 minutes, with conversations spinning off from what team members said during their responses. This can be worse with distributed team members, who may see it as their one chance to chat with the rest of the team about their work. The best thing to do in that situation is to mention that a chat with certain team members is needed after the meeting.

The most important elements of these meetings are the three questions, and it is important that everyone’s responses have value to the rest of the team. It is very easy for someone to come into the meeting and say:

Yesterday I did …, today I will do …. and I have no blockers!

The important fact to remember is that the daily stand-up is not purely a status meeting, but a chance for the team to meet and find ways to keep making progress. Before the meeting, it is good for all members to think about how their work impacts the other team members, or to identify who may be able to help them resolve issues.

A better update would be:

Yesterday, I started doing …. and while doing this I discovered this issue that will possibly affect A & B, today I will look into ways to get around this issue and may need to call on Bob to help me if I become blocked as I believe he has come across something similar before. This constraint has the potential to block me if I can’t get round it.

Priorities for the team will change all the time, and part of the purpose of the daily meeting is for the team to sync up and re-evaluate priorities based on each individual’s update. The scrum master may then consider shuffling priorities and moving people around to help with tasks that are becoming issues or taking longer than expected.

Giving details of what you intend to achieve between the current meeting and the next is a way for team members to commit to the team, and provides a chance to gain the team’s trust by following through on those commitments when they meet the following day.

It is important that the team keeps progressing. Where estimates are expanding and newly discovered work is appearing, it is a good chance for the scrum master to question team members whose work is not helping progress. There may be issues they have not felt comfortable divulging in the meeting; they may be struggling and feel they are letting the team down, or they may be distracted by outside influences. The scrum master can then find ways to resolve any issues by putting extra people on the tasks or eliminating the distractions.

Working with distributed teams brings additional communication issues: remote team members’ work will not be as visible to the scrum master and other local members, so it is paramount that remote members are able to communicate in detail about their work and any issues they are having. All teams will have different ways of communicating remotely, but whichever method is used (email, phone, IM or video-conferencing), it is of the highest priority that all members feel comfortable and part of the team; that way, they will feel comfortable discussing their work and any blockers.

These daily meetings should be completed within 15 minutes, so it is highly important that side conversations do not occur while the meeting is in progress; any additional conversations should be taken offline after the meeting.

Used effectively, with all members using the platform to be open and honest about the tasks they are trying to complete, these meetings are a sure-fire way of keeping the team focused and aiming for the same goal. If they are, the team burn-down will show the team making great progress.

Buddying Up to Aid Communication!

One of the major problems that teams trying to adopt agile tend to suffer from is trying to do too much too soon and expecting results instantly. For teams starting out on the agile journey, it is more important to get the basics right which is especially important for teams of a distributed nature.

The first focus of a team should be to do all they can to communicate as effectively as possible. With co-located teams, this is generally a case of walking over to colleagues and making sure they are aware of what you are up to and whether there are any issues. These shouldn’t be left until the following daily stand-up, but shared with the scrum master/sprint facilitator and other team members as soon as possible so they can be resolved; there may be a member of the team who has previously experienced the issue, or at least knows how to fix it.

With distributed teams, this is more difficult. It is usually left to the scrum master to keep in contact with the distributed team members, which is an added responsibility (although as facilitator, this is probably part of the job spec). Impediments tend not to get raised until the stand-up meetings, or until the team in one location has done all it can to resolve them without involving the rest of the team.

So teams in different locations can be good at collaborating among themselves, but collaborating across all sites proves more difficult.

One suggestion to ease this difficulty would be to 'buddy up' team members from different sites, so that everyone in a distributed team has someone at another site they are in constant contact with. This could start with general, non-project conversation, building a relationship based on more than the fact that you happen to be working on the same project. Both parties will then hopefully feel more comfortable talking about project issues with each other, meaning that, without forcing it, issues are raised informally and faster, and what is happening at each site is more visible to all team members. When one-sided conversations occur in meetings, the 'buddies' can discuss them afterwards to make sure everyone has the same understanding. It could also make the team more aware of the distributed members' presence in such meetings, so they are more inclined to include them in the discussion rather than just the people in the room.

These don't necessarily have to be chats over the phone; conversations over IM or email need not interrupt the flow of work. Having a nominated person to talk to, rather than routing all communication through the scrum master, may make for a team that is more aware of everything going on around them, including when someone is struggling with their tasks and may not have felt comfortable announcing it in the daily meetings.

I feel this would improve relationships between all team members and could also lead to a rise in productivity.

This obviously may not work for all teams, but I feel it would benefit those who are struggling with problems taking a while to surface and with poor communication between sites.


Pre-Planning for a Sprint Planning Meeting

I'm not claiming to be an expert here, but when kicking off a sprint it is important that everyone in the team knows what is needed to make the sprint 'successful'. Going into the actual planning meeting, the sprint team needs to have:

  • A fully prioritised backlog
  • User-stories which fit into sprints
  • Acceptance criteria for all user-stories

The idea of the pre-planning is to make sure the team is as informed as possible for starting the sprint planning. To make sure this happens, there are a series of activities which teams can perform alongside the product owner and the stakeholders.

The first of these is to gain clarity on the potential user stories being considered for the sprint. The team, together with the product owner, will discuss each user story to make sure they understand the scope of the work involved. If this cannot be defined between the team and the product owner, the stakeholders who requested the work should be called in to clear up exactly what they need from the story. Once the team understands what is needed, they should scope the story out. One important consideration at this point is whether the story has the same meaning for all members of the team, as cultural differences between team members may mean definitions differ slightly. All this leads to the team understanding what they have to deliver at the end of the sprint for this particular story, and what acceptance test criteria would be necessary to mark it as 'done'. Clarifying the acceptance criteria also highlights whether any specialised skills are needed to complete the item, which may affect its priority in the product backlog.

Once these stories have been defined, the next task is to break them down; a team that knows its velocity should be able to break them down based on that. This speeds up the actual planning meeting, as many teams go into it with items too large for their sprint and spend most of the time breaking them down rather than actually planning. The key point is that if stories are broken down before the planning meeting, the team will know earlier exactly how much work they need to do. During pre-planning, it is important for team members to identify any user stories they feel are too big for a sprint and spend some time splitting them. It is important at this stage not to force stories to fit into a sprint: if they don't fit without manipulation, they are too big and should be split into separate user stories. Squeezing them in risks tasks being forgotten, creating additional work to fold into future stories.
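To make the velocity check above concrete, here is a hypothetical sketch of flagging backlog items that are too big relative to the team's velocity. The story names, estimates and the 50% threshold are all invented for illustration; real teams would pick their own rule:

```python
# Hypothetical sketch: flag stories whose estimate exceeds a fraction of the
# team's sprint velocity, meaning they should be split before sprint planning.

def stories_to_split(stories, velocity, max_fraction=0.5):
    """Return the names of stories larger than max_fraction of velocity."""
    threshold = velocity * max_fraction
    return [name for name, points in stories if points > threshold]

backlog = [("login page", 5), ("reporting engine", 21), ("password reset", 3)]
print(stories_to_split(backlog, velocity=20))  # ['reporting engine']
```

The point is not the arithmetic but the timing: running a check like this during pre-planning means the splitting discussion happens before the planning meeting, not during it.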

When clarifying user stories and breaking them down, the team should work to eliminate as many dependencies between stories in the product backlog as possible. A good user story is independent of all other stories; obviously this is not always 100% achievable, but doing all that can be done to minimise dependencies is important.

The pre-planning phase is also a good platform for the whole team to discuss whether the current prioritised backlog still works for the release of the product; any stories that are no longer relevant should be de-prioritised or removed completely. It is paramount that the backlog does not become a dumping ground for every feature that might be wanted at some point in the future.

Holding these pre-planning meetings should not become a distraction from the work of the sprint: they should be timeboxed to a manageable length and possibly scoped into the previous sprint. The team should always be looking forward and aware of what is coming in future sprints, so planning a meeting in the last week of the sprint to flesh out the backlog for the next one gives the team something to focus towards.

All this should make the actual sprint planning meeting run more smoothly, with everyone in the team more aware of what will be in the sprint, how they can complete the stories and how to satisfy the acceptance criteria.

Lessons Learned from Agile Cambridge 2010 Day 2

Following another 5.45am start, I arrived at the venue around 8.30am for the second day of the conference. I spent the first half hour before the second keynote talking to some of the guys from the Cambridge Crystallographic Data Centre, who were attending the conference in numbers.

Building Trust in Agile Teams – Rachel Davies

Having attended the workshop Rachel ran the previous day, I was looking forward to this session. It was quite an open session with lots of audience interaction. Rachel talked about how trust is the foundation of teamwork, and as a good agile process depends on teamwork, it is obviously an important factor. A statement that sticks in my head from this session was that a lack of trust is like a tax on team interaction: it will slow the team down and costs will go up.

Rachel borrowed £20 from an audience member and also got a team to do the 'trust fall', showing how trust has to be earned by everyone involved to allow progress: with the trust fall in particular, the team had to trust each other and the 'fallee' had to trust that the team would catch them. Rachel then discussed techniques for gaining a team's trust, both from a scrum-master-type role and as a team member, with suggestions such as getting to know the rest of the team and supporting it by creating transparency. The key lesson from this session was that building trust takes time, and it takes a lot longer to regain if it is lost.

I have to admit, I had always thought trust within teams was almost a given, so this session definitely opened my eyes to the fact that not everyone is necessarily as trusting as I am. I certainly feel that Rachel's book 'Agile Coaching' will be worth buying at some point in the near future.

User Story Mapping & Dimensional Planning – Willem van den Ende & Marc Evers

After the half-hour break and another cup of tea, I attended this workshop, which was so well attended that when I turned up at the session's start time the room was already full. The session kicked off with the guys describing a better way to break down high-level requirements into manageable user stories. This is done by first breaking the system down into its different users; in their example of an online auction site, these were the seller, the buyer and the site admin. Under each of these users sits a goal for the system, and under those are organised the activities required to meet each user goal. Once these have been established, individual tasks making up the features can be created, with sub-tasks (tools) defined from there, as shown in the diagram below (courtesy of Jeff Patton and Karl Scotland).
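The hierarchy described above (users → goals → activities → tasks) can be sketched as a simple nested structure. The entries below are my own illustrative guesses at the auction-site example, not the actual workshop output:

```python
# Hypothetical sketch of a story map: users -> goals -> activities -> tasks.
story_map = {
    "seller": {
        "sell items for the best price": {
            "list an item": ["write description", "upload photos", "set reserve"],
            "manage the auction": ["answer questions", "extend listing"],
        },
    },
    "buyer": {
        "win items at a fair price": {
            "find items": ["search", "browse categories"],
            "bid": ["place bid", "set maximum bid"],
        },
    },
}

# Walking the map bottom-up yields the individual tasks to plan against.
tasks = [task
         for goals in story_map.values()
         for activities in goals.values()
         for subtasks in activities.values()
         for task in subtasks]
print(len(tasks))  # 9
```

The value of the structure is that every low-level task stays traceable back to a user and a goal, which is exactly what makes the later release-slicing step possible.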

In our team, we discussed setting up a similar mapping for a smart home energy monitor, which in itself is an interesting product but was perhaps a bit too complex for this particular exercise, as we kept coming up with features we felt were important. We did, however, narrow our options down to just the home user and the electricity supplier, and were then able to put a story board together.

Willem and Marc then discussed the next stage, dimensional planning, also shown in the diagram above. This involves selecting a number of releases from the tasks and sub-tasks defined in the story board; they can be organised so that a minimum release is defined first, from which subsequent releases can then be planned. All of this is done before any estimates have been considered, so the minimum release can be planned and a timeline defined.

The guys also described a very interesting set of analogies for releases: 'dirt-track road', 'cobbled road' and 'asphalt road', describing the different quality levels of a release. The dirt-track road is a minimum-quality release that gets the user from A to B. The cobbled road is the quality level most users would be happy with: it does all that is needed, but without any bells and whistles. The asphalt road is the ultimate version, a highly polished release.

All in all, this was a very insightful session where I learned a lot about how to tease out high-level requirements in a simple and straightforward way.

The Challenges of Measuring ‘Agility’ – Simon Cromarty and Simon Tutin (GE)

I had sat with these guys at the pub the night before, so was intrigued as to what they would be discussing; both had been tight-lipped on exactly what they would talk about. They kicked off by showing the journey GE had taken as an agile company to reach their current level, demonstrating that getting to a relatively stable agile capability takes several years.

Simon C then split us into groups and asked us to discuss what we thought 'agile' meant. This was quite a thought-provoking question, and it was amazing how many differing opinions there were within the groups. Obviously there were the generic answers such as 'adaptable teams', 'fail fast', 'short development iterations' and 'collaborative environments', amongst others; there was quite a long list on the board by the time the Simons had gone around all the groups and collated them. Then Simon C asked what we thought made an agile team successful, which again provoked a lot of discussion. From what I remember, our team came up with ideas such as 'delivering on time/early', 'all the team being enthusiastic about the process and working together', and 'if the project is going to fail, the sign of a successful team is that it fails early'.

Simon C then discussed an assessment done with all the agile teams at GE: a 200-question paper that the team discusses together, ranking each answer between -3 and 4, where -3 means a systemic or organisational impediment the team can't fix on its own, 3 means they 'always achieve this' and 4 means 'we have a better way'. This is a clever way of assessing the teams while also motivating them to improve what needs improving, and therefore to keep evolving as an agile team. I enjoyed this session and felt this assessment was definitely a way of showing how 'agile' a team could be.
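The scoring scheme described above could be summarised with something like the following sketch. The question texts and scores are entirely invented; only the -3 to 4 scale and its meanings come from the talk:

```python
# Hypothetical sketch of GE-style team assessment scoring: each question is
# ranked -3..4, with -3 flagging a systemic/organisational impediment the
# team cannot fix on its own.

def summarise(answers):
    """Return (systemic impediments, mean score) for an assessment."""
    impediments = [q for q, score in answers.items() if score == -3]
    mean = sum(answers.values()) / len(answers)
    return impediments, round(mean, 2)

answers = {
    "We demo working software every sprint": 3,
    "The build is always releasable": 1,
    "Teams can choose their own tools": -3,
}
impediments, mean = summarise(answers)
print(impediments, mean)  # ['Teams can choose their own tools'] 0.33
```

Separating out the -3 answers matters because they are, by definition, problems to escalate to the organisation rather than items for the team's own improvement backlog.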

In the break before the final session, I found myself torn between 2 or 3 different books on the book stand, in the end plumping for 'Growing Object-Oriented Software, Guided by Tests' by Nat Pryce and Steve Freeman (little did I know they would be on the panel in the final session), a book all about Test-Driven Development, which was one of the big themes of this conference.

Creating A Development Process For Your Team: What, How and Why – Giovanni Asproni, Nat Pryce, Steve Freeman, Rachel Davies, Allan Kelly & Willem van den Ende

This was a panel session with questions fired from the audience, starting with 'How do you persuade a company to fully buy into agile?', which prompted discussion around finding the right reasons for moving to agile and making the move gradually. The panel also stressed that Scrum on its own is not really enough for a company; XP practices such as pair programming and TDD are also needed. There were then quite a lot of questions about pair programming, and about why Scrum is not necessarily enough: the panel described Scrum as just a project facilitation tool, and said that for a team to be completely agile, other methods must also be used.

It was an interesting session; I can't remember even half the questions that were asked, but I remember feeling I had learned a lot from it.

Mark Dalgarno then brought the conference to a close. There was a quick flurry of goodbyes and the inevitable exchange of business cards, and everyone set off on their journey home.

It was a fantastic conference; it was great to meet like-minded professionals and exchange stories of our experiences with agile processes.

I must say thanks to Mark Dalgarno and team for putting on such a great conference and to Redgate for sponsoring the conference, being out in force to make us all feel slightly envious of their working environment and for providing lots of chocolate for the 2 days! Thanks to all the speakers and for everyone who I spoke to for being so welcoming and interesting!!

I hope I will be able to attend in 2011 and maybe even bring some colleagues along with me. 🙂