Not Familiar with Testers? – Proving Your Worth With a Development Team

Last June, I was offered a new challenge within my current company, which I grasped with both hands. I moved from a team with a very well-oiled engineering process and a very stable test framework that gave them confidence in their product, a team I had worked in since graduating from university seven years earlier. The team I moved to had no active testers and was still trying to define its engineering processes.

This has proved to be a challenging but enjoyable change, and it has really made me work hard to demonstrate what I can bring to the table and why testing/QA teams are important.

One of my first actions when I joined the team was to ask to be added to code/peer reviews; previously, code had been reviewed between developers only. This brought some resistance initially:

  • “Why do you need to be on Code Reviews, you’re only QA”
  • “What benefit will it bring having you on the review?”
  • “It will take longer”

I went into more detail on this in a previous post, but the point was that they were not keen to start with. The next stage was to bombard me with so many code reviews that I had very little time to do anything else; I stuck with it and eventually got through them.

There was an initial reluctance to involve QA but, to be fair to the developers, they were open to trying once we started to discuss things with them.

Over the next six months or so, we worked hard as a team to prove ourselves, and we are now at a point where QA is considered in design discussions, code reviews and any major decision making. It's been a challenge, but we are now showing signs of working as one team. There's still a way to go, but we are happy with the progress. So how did we get to this stage? I put it down to three things:

1. Getting the Right People – The team being put together has to have a solid set of skills: a good mix of traditional testing skills and strong technical developers to work on the frameworks. The testing mindset needs to be there, and everyone needs to be confident enough to question things and follow through when something needs doing. When developers are reluctant to work with testers, it helps if the test team have the people skills to get to know them socially, or at least to talk to them about non-project/work topics, building enough of a relationship that it becomes easy to discuss work topics with them.

2. Find Ways to Be Involved – Asking questions, listening to conversations and being willing to take tasks which involve working alongside a developer all help build understanding of the functionality. Know your stuff: if something needs looking up, spend time reading around the subject so that you can discuss it with the developers. Ultimately, it is about doing all you can so that the developers trust that you know what you're doing, and that you will test the product effectively and verify its quality. Set up bug scrubs or design discussions and invite development along; it's things like this which prove you are all fighting for the same cause.

3. Find Issues Through Testing – It might sound obvious, but if the team were previously used to relying on unit testing and their own dev testing, then the QA testing needs to extend that coverage and find issues their testing wouldn't find. Whatever way the testing needs to be done, put together a framework which lets the team spend their time testing, rather than constantly fixing issues and being unsure whether a failure is due to the framework or the product under test. The next stage in proving worth is to find issues which may not otherwise have been found; issues which would have caused major problems if released.

Having these three things will give you a fighting chance of a testing team which works well with development. Maybe we've been lucky with the people in our team, but the difference in the last 6-9 months in the attitude towards the testing team has been huge. Here's hoping it will continue to improve.

TestBash 2015 – More than Just a Conference

I have to admit, I was really excited about attending TestBash in Brighton. It was my first conference for 18 months and there was a real buzz about this one on social media. The schedule looked really interesting, and there were five of us attending from work, so it was kind of like a team outing.

The journey down to Brighton on the Thursday didn't go without a hitch: the train from Victoria to Brighton got caught behind a broken-down train, which meant we got in 30 minutes later than planned. By the time we had checked into the hotel and found somewhere to eat, it was too late to attend the Pre-Conference Social. Personally, I was gutted, as I had been speaking to several other testers on Twitter in the weeks beforehand and was looking forward to meeting them; the rest of the team seemed quite glad to be going back to the hotel to get some sleep, and I can't really blame them for that.

The following morning, Jesus (from our team) and I got up at 6am and joined the Pre-Conference Run. We completed the 5km route along the promenade and were back at the hotel having breakfast by 7.30.

We arrived at the Brighton Dome and, from the moment we walked in, TestBash had a different feel to other conferences I had attended. Whether it was the ninja stickers we wore with our names on rather than formal name badges, or the Ministry of Testing t-shirts, there was a real feel of community.

Here is the Intel Security team who attended the conference

There were lots of great talks during the day, full of interesting concepts:

  • It was interesting to discover the differences between the testing and release processes for iOS and Android apps.
  • I was fascinated by Martin Hynie’s story of how renaming the test team to Tech Research, then to Business Analysts, then back to Test caused the company to treat the same group of individuals differently, really showing the power of job titles.
  • Vernon Richards gave an amusing look into some of the phrases that are thrown around about testing such as “Anyone can test” or questioning why testing didn’t find the one bug that caused problems in production. He also gave an example of how to deal with a product manager who wants a number for how long testing will take and doesn’t get the answer he wants.
  • Maaret Pyhajarvi’s session really showed that quality isn’t the responsibility of just the testers; in fact, Maaret went as far as to say that quality is built by the developers, and testers just inform on that quality. This came from her account of working as a solitary tester on a team of developers and seeing that quality initially went down with the addition of a tester, as the developers became less vigilant with their own testing before handing over, expecting Maaret to pick it all up. She showed us how she got them on board so that, as a team, they improved the quality.
  • Iain McCowatt discussed how some people have the intuition and tacit knowledge to see bugs, whereas others have to work methodically to find them; he then went into ways to harness this diversity within a test team.
  • The concept that stuck with me from Matthew Heusser’s talk on getting rid of release testing was that changing the process shouldn’t be done all at once; the best way is to try one or two new stages first and make gradual changes. (I also liked the fact that he worked on his slides on his tablet as he presented.)
  • Karen Johnson gave a very thought-provoking talk on how to ask questions. This really resonated with me, and I can certainly see ways to get more out of people when I’m asking questions.

There were three talks which really stood out for me.

The Rapid Software Testing Guide to What You Meant To Say – Michael Bolton

I had interacted with Michael a few years previously, when he gave me some constructive criticism on one of my earlier blog posts on this site, so I was intrigued to see what he was presenting. It was a very interesting talk; Michael is a very engaging speaker, and it’s clear why he is one of the most respected members of the testing community.

The aim of this session was to remind us of some of the phrases commonly used by testers which can cause misunderstanding or misconception. He showed examples where he swapped the word “testing” for “all of development” in phrases such as “Why is testing taking so long?” and “Can’t we just automate testing?”, suggesting that people may use testing as a scapegoat when, in fact, the whole process should take the blame.

Michael went on to talk about how safety language should be used: phrases such as “…yet” and “so far”. Rather than making statements such as “It works”, we should say something like “So far, the areas which I have tested appear to meet some requirements”, or words to that effect.

The discussion of testing vs checking came up (which was part of the issue Michael had with my earlier blog post… I’ve since done the necessary reading to know the difference), and he showed how checking fits into the testing process.

Overall, I learnt that it can be very easy to make statements which raise expectations higher than they should be, or give the wrong message completely. I will certainly be making sure I use safety language more often in the future.

I also feel it would be really useful to go on the Rapid Software Testing course. Something to look into this year.

Why I Lost My Job as a Test Manager and What I Learnt as A Result – Stephen Janaway

I hadn’t heard Stephen present before, and he came across very well. His talk covered how, when he was working as a Test Manager in an agile process, he was managing individuals across several different teams, while each team had its own development manager. He talked about the difficulties in decision making and how products were slow to be developed and released.

Stephen then described what happened next: test managers and development managers were removed from their roles, a delivery manager was put in each team, and the process improved. The question then was what happened to the Test Manager? Stephen explained the roles he is now involved in: coaching management on testing and on how to manage testers, setting up internal testing communities so that testers still have like-minded people to discuss testing issues with now that they haven’t got a test manager, and generally being an advocate for testing and quality within the organisation.

It showed that Test Managers need to be adaptable and prepared to go down a slightly different path. This seems to be the way the testing industry is going, so it was interesting and reassuring to see that there are other options out there.

The other point that hit me during this presentation was that of internal testing communities. We have lots of individual test teams working on different projects, all developing their own automation frameworks and using different tools; it would be good to bring everyone together to share ideas, and maybe get some external speakers from the testing community in to inspire them.

I really enjoyed Stephen’s talk and it gave me plenty of food for thought about the future.

Automation in Testing – Richard Bradshaw

Richard’s talk resonated with me for one reason: he explained how, in his early years, he had been seen as the automation guy and would try to automate everything, until he realised that too much had been automated and the benefit was no longer being seen. I have seen for myself how teams can be so focused on having all of their testing automated that they actually spend more time fixing failing tests when the next build completes than they do writing any other tests. I wholeheartedly agreed with Richard when he stated that automation should be used to assist manual testing (writing scripts for certain actions to speed up the process), rather than being relied on for everything.

This does seem to be a hot topic for discussion, as there is the question of how automated regression tests/checks and automated non-functional testing should be approached. This presentation definitely gave me a lot to think about regarding how we can improve our use of automation when I get back to the office.

Richard presented this really well and I would say it was my favourite talk of the day.

I have to say that the schedule from start to finish was enjoyable, the lunch was delicious and the organisation of the day was fantastic. I honestly can’t wait to go back next year.

The testing community really felt like a great place to be: so many good people and sharp minds with interesting ideas, and a real chance to improve yourself by attending conferences like this.

I am really glad that I found http://softwaretestingclub.com and was able to find details of the conference there. The next step for me is to help set up the internal testing community and get them to look at it too. Maybe we can have a bigger Intel Security contingent next year; maybe I will find something to present. :)

Overcoming the Resistance – QA Involvement in Peer Reviews

Maybe I was quite naive about peer reviews, but my previous experience was that having QA/testers involved in code reviews alongside developers was a natural part of the process. Whenever a new feature or bug fix was implemented, before the code was checked in, the developer would set up a walkthrough with another developer and a member of the QA team. The team I was part of was quite a mature engineering team with a defined coding standard which was ingrained into the team; never was a second thought given to committing code without showing it to QA and talking it through with them. These walkthroughs would provoke discussions about how to test the features and whether the developer-written unit tests had enough code coverage for QA’s satisfaction. Moving to a different team, I have seen that this isn’t necessarily part of the process: peer reviews happen between developers, and involving QA has never been considered.

I have read a lot about this subject and, actually, the level of maturity around code reviews in that first team is relatively rare in the industry. So why is it so rare? I guess it depends on perspective and the level of testing being done:

  • If ‘black box’ testing is the main form of testing, then I guess not knowing about the code is the ‘right’ thing to do?
  • If ‘white box’ testing is used, then knowing the functionality you are testing is paramount; beyond the functional specifications, the best way to see how the area under test works is to review the code.

All the testing I have ever done has been a mixture of both of these: there has been testing I can pull straight out of the user guide or functional specification, and testing where I have needed to know intricate details of the code to ensure I am covering all feasible code paths.

I have always found it beneficial to be involved in peer reviews, even if I don’t say anything during the review and just soak in how the code works, writing down ideas for testing. Usually, though, I will ask questions such as “What if I entered this here?” or “What if I did this?”, using the meeting to make the developer think about their implementation rather than just plodding through the code line by line.

So why is it so alien to some teams to involve QA? Here are my thoughts on some common objections:

  • QA don’t have the skills to review code – Not every QA resource will know the syntax of the particular language, but does that mean that, by sitting with developers and understanding how the code works, they won’t find issues or raise questions which provoke the developer to improve their code?
  • Having QA involved will delay the build/release – Only if you treat QA as a separate entity to development and run separate peer reviews. If they are involved in the same peer review, it shouldn’t make much difference. We are here to prove the quality of the work, not to be a hindrance.
  • It’s only a one-line change, why would QA need to review that? – On one hand, yes, it is only one line, but the context of that one line could have an impact on existing testing; even just having QA aware of the change could be useful.

I’m not for a second suggesting that every development team will be dead against the idea, but as we move to an increasingly ‘agile’ world, separating development and QA at this level needs to change; we should be promoting a ‘One Team’ approach where value is provided by everyone involved. QA can bring value to code reviews. Quality should be built in at the start, and the earlier this can be proved, the better things go from there. It needs to be clear that a critique of the code is not a critique of the developer.

Some quick wins may be needed to win the development team over:

  • Read up on the functionality before you attend the review, so you have a basic understanding of how it was intended to be developed
  • Ask logical questions
  • Discuss options for testing
  • Flip it and have development review tests (share both activities)

Baby steps are needed for progress. Find a way to get involved with some small tasks to start with, and build up trust with the developers that you are not just there to be a pain in the backside; if you work side by side with the development team, it will improve the product delivered.

Any thoughts on this topic would be most appreciated.

Lessons Learned from BCS SIGiST Summer 2013 Conference

Thought I could kill two ‘proverbial’ birds with one ‘proverbial’ stone here and document the latest conference I attended while also writing my first post on here in 2 years!

It was an early start: those of us who were going met at the rail station and got the train into London Marylebone. A short walk to the venue and we were there, ready to start. A quick cup of tea and a biccy, then into the lecture theatre for the opening keynote:

The Divine Comedy of Software Engineering by Farid Tejani (Ignitr Consulting)

Farid confidently presented a talk outlining the changes many industries have faced over the past decade, industries which have been ‘disrupted’ by the digital world, stating that “Instability is the new Stability”. Blockbuster Video and AOL’s ‘free internet’ CDs were used as examples of how industries have been disrupted. He then said that software engineering could be the next industry to be ‘disrupted’ by the digital revolution. If I understood Farid correctly, his point was that unless we adapt and change with our surroundings, companies will be left behind as their function becomes obsolete (much like Blockbuster Video).

He then turned to Software Testing and discussed how we were going wrong as an industry, suggesting some common misconceptions about testing which need to be corrected such as the following:

  • Testing is a risk mitigation exercise
  • Testing is an efficient way of finding defects
  • Testing adds quality to software

He listed nine of these in total and explained why he felt they were myths. Farid then covered the ever-contentious issue of agile testing versus waterfall/V-model, and how he thought all companies need to move to a more ‘agile’ model to avoid being disrupted in the future. Personally, I can see what Farid is suggesting here; however, there will always be companies and industries which continue to use non-agile methodologies purely because they have to, or are mandated to (such as the aviation industry). That doesn’t mean they can’t evolve the way they work within their confines, though; there needs to be a better feedback loop at each stage of a project/product/programme.

Overall, it was an interesting talk, and the obvious message was that, as testing professionals, we should keep ahead of the curve and be aware of changes that may affect our ‘multi-billion dollar industry’.

Then time for a tea break and another biscuit or two (maybe there were muffins at this point too :-))

Expanding our Testing Horizons by Mark Micallef (University of Malta)

Mark Micallef has industry experience, having managed the BBC News testing team amongst other roles, and has now moved to academia to lecture on and research software testing. He brought his knowledge to SIGiST to talk about some new ideas and techniques.

Mark started the talk by saying that, having attended many conferences, he has found the industry seems to talk continuously about the same topics: agile, automation and test management, amongst others. He felt that, as industry professionals, we should be striving to identify new and better ways to test and to find ‘the next big thing’, as web automation was back in 2007. He suggested this could happen through industry working more closely with academics. There may be obvious differences in the way the two worlds work, but there would be benefits for both sides. Academics tend to be all about understanding: they need to know exactly what they intend to do, and they look for perfect, complete and clean solutions. Software professionals, by contrast, tend to be pragmatic and develop a solution that works and ‘adds value’ rather than being perfect and complete. That obviously doesn’t mean one is wrong; they just have different ways of working.

He then suggested 3 techniques which could be used to improve testing.

The first insight was “Catching Problems Early, Makes them Easier to Fix”

This related to static analysis tools, but took things one stage further, suggesting that most static analysis tools give too many alerts, which can lead to cognitive overload and, eventually, the tool or technique being abandoned. This brought us to the suggested concept of the ‘Actionable Alert Identification Technique (AAIT)‘: the idea that you apply criteria to the output, such as areas of code you wish to focus on, certain junior developers’ code you wish to look at, or just code submitted in the last week, and are then given the alerts ranked in prioritised order. This obviously depends on some form of automation being set up to perform the analysis, but the outcome would be worthwhile.
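
To get my head around the idea, here is a rough sketch of what such a ranking might look like. To be clear, this is my own illustration: the alert fields, the focus paths and the scoring weights are all made up, and are not from the talk or any particular tool.

```python
# Illustrative AAIT-style ranking of static-analysis alerts.
# All fields and weights below are assumptions for the sketch.
from dataclasses import dataclass

@dataclass
class Alert:
    file: str         # path the alert was raised against
    severity: int     # 1 (low) to 5 (high), as reported by the tool
    age_days: int     # days since the flagged code last changed

FOCUS_PATHS = ("src/payments/",)   # hypothetical areas we care most about

def priority(alert: Alert) -> float:
    """Higher score means more likely to be worth acting on."""
    score = float(alert.severity)
    if alert.file.startswith(FOCUS_PATHS):
        score += 2.0               # alert sits in a focus area
    if alert.age_days <= 7:
        score += 1.0               # code was submitted in the last week
    return score

alerts = [
    Alert("src/payments/refund.py", severity=3, age_days=2),
    Alert("src/legacy/report.py", severity=4, age_days=400),
]
for a in sorted(alerts, key=priority, reverse=True):
    print(f"{priority(a):4.1f}  {a.file}")
```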

The second insight was “We Should be Aware of the Effectiveness of Our Test Suites”

Mark started off suggesting that code coverage tools should be used to work out test coverage, but showed with simple examples that you can achieve 100% line, decision and function coverage without actually testing anything useful. His example had a function meant to multiply two numbers together, with an assertion that passing in 5 and 1 gives 5. Simple enough; but if you change the multiply sign (*) for divide (/), passing in the same two numbers (in the same order) still gives 5, rendering the test effectively useless! The suggested concept here was ‘Mutation Testing‘: you create multiple ‘mutants’ of your product by modifying the code, then run these ‘mutants’ through your tests. It is a bad sign if all your tests pass as though nothing has changed; ideally, even a small change in code should cause at least one test to fail. Where mutants survive, you can identify the changes needed to improve the test cases.
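
To make the multiply/divide example concrete, here is a quick sketch in Python. This is my own reconstruction of the example, not Mark’s actual code:

```python
def multiply(a, b):
    return a * b

def test_multiply():
    # This single assertion achieves 100% line coverage of multiply()...
    assert multiply(5, 1) == 5

# ...but a 'mutant' which swaps * for / passes the very same check,
# because 5 / 1 is also 5. The test cannot tell them apart:
def multiply_mutant(a, b):
    return a / b

assert multiply_mutant(5, 1) == 5   # mutant survives: the test is weak

# A stronger test 'kills' the mutant: 5 * 2 is 10, while 5 / 2 is 2.5.
assert multiply(5, 2) == 10
```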

Mark did then highlight some problems with mutation testing, such as the expense of generating mutants and executing the tests against them. Despite the problems, it sounded like a very interesting technique and is certainly something I intend to investigate further.

The third and final insight was “Testing will Never Provide Guarantees that Bugs do not Exist”

Mark talked about how testing is currently seen as a way of following every possible path in the code to improve coverage. He suggested that an additional concept, ‘Runtime Testing’, could help. This is where the test is exactly what the user may be doing right now, so it would work well for web pages or apps with a lot of user interaction. It requires mathematically working out the exact paths users may take when using the software or website, which will not come naturally to everyone, and there may also be performance overheads if you are trying to test and measure what the user is currently doing.

Mark then went on to recommend working with research and academia, saying it can have huge benefits, and that we should look into some of the whitepapers available on different topics for ideas.

This was a great presentation which gave some new ideas, and new ways of finding ideas, through academic research.

Requirements Testing: Turning Compliance into Commercial Advantage by Mike Bartley (Test and Verification Solutions)

Following straight on from Mark’s presentation was this one on requirements testing. Mike started off with the example of buying a laptop: you have a set of things you want when you go to purchase a new one, but once you have bought it, can you map the features of the laptop back to the requirements you originally had? It was an interesting point; people often have a preconceived idea of what they want and don’t always get exactly that.

Mike then moved on to software requirements and explained that they are a lot more complex. He said that poor and changing requirements have been the main cause of project failure for years, which highlights the need to capture and store requirements. Mike then asked, “How do we make sure requirements are implemented and tested?” The obvious answer is to ensure they are tracked and measured as the project carries on. Mike took this a step further and talked about mapping requirements to features, features to implementation, and finally to tests. Setting up a uni-directional flow of these mappings would highlight ‘Test Orphans’ (tests which don’t map to any requirement, feature or implementation) and ‘Test Holes’ (requirements, features or implementations with no tests mapping to them). This highlighted the importance of knowing the exact purpose of every test; I’m sure most projects which have gone through several releases have orphaned tests that are kept purely for sentimental reasons. These orphans waste time and effort, and of course the test holes highlight a risk, as a requirement will be missing a test.

Mike then talked about requirements management, and how there is a reasonable number of tools which can map requirements to features, designs, units and even code, but not necessarily to tests, or at least not with the ability to show test status or results.

The presentation then showed how SQL can be used to create a bi-directional requirements mapping, meaning we can relate test status back to the requirements themselves. This sounds like an idea which could bring a good return on investment; there would be an initial cost to build the database and load all the requirements and test information into it, but the potential business advantage is massive, as all holes and orphans could be identified quickly, along with analysis of risk and impact.
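
To get a feel for how the orphan/hole queries might work, here is a rough sketch using SQLite from Python. The schema and the data are entirely my own assumptions, not anything shown in the talk:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE requirements (req_id TEXT PRIMARY KEY, title TEXT);
CREATE TABLE tests (test_id TEXT PRIMARY KEY, req_id TEXT, status TEXT);
INSERT INTO requirements VALUES ('REQ-1', 'User can log in'),
                                ('REQ-2', 'User can reset password');
INSERT INTO tests VALUES ('T-1', 'REQ-1', 'pass'),
                         ('T-2', NULL, 'pass');
""")

# 'Test holes': requirements with no test mapped to them.
holes = db.execute("""
    SELECT r.req_id FROM requirements r
    LEFT JOIN tests t ON t.req_id = r.req_id
    WHERE t.test_id IS NULL
""").fetchall()

# 'Test orphans': tests which map to no known requirement.
orphans = db.execute("""
    SELECT t.test_id FROM tests t
    LEFT JOIN requirements r ON r.req_id = t.req_id
    WHERE r.req_id IS NULL
""").fetchall()

print("Holes:", holes)      # [('REQ-2',)] - REQ-2 has no test
print("Orphans:", orphans)  # [('T-2',)]   - T-2 maps to nothing
```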

The potential of this idea is huge and again is something that will require some additional investigation.

It was then time for lunch, and I have to say the food was delicious: Moroccan lamb with wedges and salad, followed by Eton mess! We did a little bit of networking and then decided to spend half an hour wandering around Regent’s Park to get some fresh air.

Improving Quality Through Building a More Effective Scrum Team by Pete George (Pelican Associates)

With my interest and knowledge in the Scrum area, this was probably the talk I most looked forward to before the conference. Pete George clearly knows his stuff and kept everyone interested throughout. He started off by talking about the “Marshmallow Challenge” (http://marshmallowchallenge.com/Instructions.html) and how useful it is for teams to try. He then showed a graph of how different professions had fared in the challenge. Engineers and architects came out on top, able to build the tallest structures, but the next highest group was kindergarten children. This said more about the techniques used by the other teams, which generally involved spending 90% of the time building a structure out of spaghetti, putting the marshmallow on top with about a minute to go, watching it fall, then quickly throwing together something in the last 30 seconds that would hold it. The kindergarten children used a completely different strategy: they built a small stable structure with the marshmallow in place, then continued to enhance it (effectively doing agile without realising it!). This was an interesting exercise which may be worth trying with our team.

Pete then gave a brief, succinct overview of the Scrum process using the 4 Artifacts – 3 Roles – 4 Ceremonies description. He talked about how the continuous inspect-and-adapt approach means quality control is built into the Scrum framework. But the point Pete addressed next was the crucial one: the team is only as good as the people in it. Companies spend all the money they have on changing the process teams work in, but if the teams aren’t working well together, projects will still fail. Often the problems come down to people who are set in their ways and refuse to change; these may be crucial people technically, but when it comes to teams, they don’t seem to fit. This is where things such as team building (socialising as a team) come in, but also the Belbin Team Roles theory: the idea that any good team will cover off all 9 roles shown below.

Belbin Team Roles

There are some interesting roles here; in my head, I’m already trying to identify who fills which role within my own team.

It was a useful talk; Pete obviously had to pitch it both at people who didn’t know much about agile/Scrum and at those who did, and I thought he did a great job of covering both camps.

Mission Impossible: Effective Performance Evaluation as Part of a CI Approach by Mark Smith (Channel 4) and Andy Still (Intechnica)

This talk covered how Channel 4 included performance testing within their continuous integration setup to get constant feedback on performance. Andy Still from Intechnica started the presentation by talking about the CI approach. He said the first thing to do when considering performance testing in continuous integration is to make performance a first-class citizen: it should be treated as being as important as any functional requirement, with performance requirements defined and documented alongside the functional ones at the start of the project. Tests should have realistic pass/fail criteria relevant to the stage of the project, and should be able to run without any human interaction or interpretation. Andy then made an interesting point: performance is a linear problem, not a binary one, meaning the check-in that broke the build may not be the one which caused the failure. The last check-in may have ‘tipped the scale’, but a previous one may have pushed the scales from nowhere near the limit to close to it. Andy then started a small debate by asking whether the real challenges to implementing performance tests within CI come down to process or tools. He presented both sides of the argument well and left us to decide; personally, I was split, as I can see that both process and tools present challenges for teams trying to set this up.

Andy then handed over to Mark, who talked about Channel 4’s particular needs for CI performance testing. Mark mentioned that performance issues can be more challenging to fix than functional issues, and that build failures and short feedback loops can stop these breakages making it into the codebase. He discussed whether you would want pure CI performance testing, stating it may be disruptive at the start of a project, but on stable projects it provides early feedback on performance issues. He then talked about Channel 4’s choice of tools: Jenkins for build management, JMeter for load testing and WebPageTest for front-end instrumentation, and went through how their particular system works.
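
As a rough illustration of the ‘no human interpretation’ point, a build step can compute a percentile from the load-test results and fail the build when it goes over budget. This sketch assumes JMeter has been configured to write its results as CSV with its usual ‘elapsed’ column (response time in milliseconds); the file name and budget are made up:

```python
import csv
import sys

BUDGET_MS = 800   # hypothetical p95 response-time budget for this project stage

def p95(values):
    """95th percentile by nearest rank on the sorted values."""
    values = sorted(values)
    return values[int(0.95 * (len(values) - 1))]

with open("results.csv", newline="") as f:
    elapsed = [int(row["elapsed"]) for row in csv.DictReader(f)]

actual = p95(elapsed)
print(f"p95 = {actual} ms (budget {BUDGET_MS} ms)")
if actual > BUDGET_MS:
    sys.exit(1)   # a non-zero exit fails the Jenkins build, no human needed
```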

Finally, Mark summarised what CI performance testing gives us: very short feedback loops, the ability to fail builds based on multiple metrics, and performance trending data.

Another very interesting talk, very well presented and on a topic which could be of use to us; possibly my favourite presentation of the day.

Time for the final tea-break before the closing keynote.

Keep Calm and Use TMMi by Clive Bates (Experimentus)

Clive started by introducing Experimentus, an IT services company who help optimise their clients’ approach to software quality management. He stated that testing needs to improve because software fails, and we need to be efficient at stopping those failures; that means doing things the right way and removing the barriers to quality work. The place to start is to recognise there is a desire to do better and to gather evidence of the problem. Clive then described the seven-point methodology used to help companies examine their process and make improvements, called IMPROVE (Initiate, Measure, Prioritise & Plan, Define/Re-Define, Operate, Validate, Evolve). He went on to talk about Test Maturity Model Integration (TMMi), a staged assessment model like CMMi but focused on the testing process. There are five levels, from Level 1 (Initial) to Level 5 (Optimising). From what Clive said, I understood that most companies would be working towards Levels 2 and 3, as these require the most effort; Levels 4 and 5 are then the icing on the cake, so to speak (there is more to it than that, but you get the idea).

Clive talked about why a client would use the TMMi model, stating it is the de facto international standard for measuring test maturity and is focused on moving organisations from defect detection to defect prevention.

He then talked about the features of an assessment and the types of companies who had been through one so far. It was an interesting talk and something different from the rest of the day, although I didn’t feel there was much I could personally take back, other than asking the powers-that-be whether they were aware of it.

That was the final talk of a full and enjoyable day of presentations. I heard good things about the two workshops, but unfortunately there weren’t any places left when I looked at joining them.

So things I will take away:

– Mutation Testing looks like a useful technique to try
– Actionable Alert Identification Technique is something which could help us cut the noise from Static Analysis
– Using SQL to map requirements to features/tests looks useful
– Belbin Team Roles Theory could be used to associate team members with roles
– Continuous Integration with Performance testing is a must!
– I would like to attend the Certified Agile Tester training. :-)

Agile Attitude

Heard these phrases before?

“How hard can adopting Agile methodology be?”

“If I go on a ‘Certified Scrum-Master’ course, it will all be ok, right?”

It is very common for agile to be misunderstood at every level.

For a new team, starting to implement agile methodologies can be a daunting job, and the first step is often to read a book and take it from there. I’m not saying there is anything wrong with reading a book; in fact, why not read two, three or more? But in my experience, reading a book only helps the person who reads it, so that book will need to go around everyone in the team, including project managers, CEOs, customers and everyone else involved in the project. Then comes the fun bit: discovering that everyone has interpreted the book slightly differently and ‘doesn’t agree with all the principles’. This should ring alarm bells, but in a lot of cases it doesn’t; if members of the team don’t agree with the principles of agile, implementing it effectively will prove problematic.

Once everyone has read and digested a book (or two), the next stage should be to get everyone into a room for some interactive training. This can be a chance for an agile consultant/coach to come in and show the team how it works, or a chance for the team to get together and discuss everything they have understood; either way, the important thing is that EVERYONE buys into it. A development team that buys in but a development manager who doesn’t can cause no end of issues with how the project is run. Or, in the more common case, the team buys in but higher-level management still want to see plans, schedules and a final release date. Especially for a team new to agile, a firm schedule is very difficult to estimate while the team’s velocity is unknown, and a ‘firm’ schedule does slightly defeat the point of the term ‘agile’.

A perhaps bigger concern is the team that buys into agile half-heartedly: they send one person on a Certified Scrum Master course and then hold all the sprint-related meetings (kick-off, planning, daily stand-up, etc.) but do very little else. No TDD, no pair programming, no backlog burn-down. This probably suits the members who are non-committal about the process, as it means very little extra effort apart from a few meetings. The problem is that there will be no marked quality improvement in the deliverables, and they won’t necessarily be delivered any quicker. When implementing agile processes, it is imperative to be disciplined and for everyone to fully accept what is expected of them.

Agile should be implemented on strong foundations of understanding and a willingness to change from top to bottom. The most important change is one of attitude over priorities: working software instead of intricate design documents, responding to change over following a strict plan (not to quote the agile manifesto, of course!). The team will need to be motivated and recognised for successes, so maybe a pizza lunch once a sprint, or even a team-building day out; these kinds of gestures get the team bonding and wanting to work together to produce the goods! The move to agile should be a positive change; if it isn’t working, analyse the root cause and resolve the issues. There is a chance that agile isn’t right for the team or project, and if that’s the case, a process that brings the best out of the team should be identified instead. It isn’t for everyone, but those who use it, and use it well, have never looked back.

It should be understood that a difference in results will not happen overnight; getting the whole team on board and producing reliable, frequent deliverables of higher quality will take time and investment.

So be patient, I believe it’s been said before: “Good things come to those who wait!”

Testing Effectively in an Agile Project

This post will give an overview of how to make a testing team work effectively alongside a development team during an agile sprint.

The first step to ensuring the team works collaboratively is to make sure that all functionality to be developed within the sprint is well explained in frequent discussions, and that everyone understands how each area will work. While functional documentation for each feature is being distributed and reviewed by all parties, discussions should be happening with all team members about how each feature could be tested; brainstorming sessions are a good way to pull these thoughts together, and having the development team involved will not be a hindrance. Asking the development team about any uncertainties that arise in these discussions is imperative; there may be areas they haven’t thought of either, and there may be areas deliberately left open to give the development team some creative freedom to build what they feel fits best.

Questions may be asked of the development team all through the sprint cycle, so they should make themselves available as often as is feasible (obviously, they have to be left to develop the features as well).

Where possible, it is ideal to automate as much testing as the team can manage. This will possibly cause a bit of pain to set up in the first place, but once the system is in place, future cycles become a case of running the scripts that have been configured rather than manually running every test. Writing an automated system may need to be planned out like a functional feature of the software. There are a few key points to take into account when automating:

Identify High Value Tests – It is important to identify the test cases which will provide the largest return on the time invested. Aiming for 100% automation coverage isn’t practical or cost-effective, and would be almost impossible to achieve!

Automate What is Stable – Communicate with developers to make sure automated tests aren’t written for areas of code which are still in a volatile state; tests for stable areas will aid the team’s effectiveness and avoid re-work.

Automated Tests Can be Run at Any Time – A large benefit of automated tests is that they can be scheduled for whenever they are needed.

Automation Helps Improve Software Quality – Automated tests generally run faster than a human tester, but the biggest benefit of automation is usually seen in the next release of the product. To see benefits earlier, automate the long, repetitive tests first to free up testers for other tasks.
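
As a small illustration of the first and third points above, tests can be tagged so that a scheduled job runs just the high-value subset whenever it is needed. This sketch uses pytest markers; the marker name, the stand-in product code and the tests are all made-up examples:

```python
# test_basket.py
import pytest

def add_to_basket(basket, item):
    """Stand-in for the real product code under test."""
    return basket + [item]

@pytest.mark.high_value
def test_add_to_basket():
    # Identified as giving a large return on the automation investment.
    assert add_to_basket([], "book") == ["book"]

def test_add_preserves_existing_items():
    # Lower-value detail test: run in the full suite only.
    assert add_to_basket(["a"], "b") == ["a", "b"]

# A scheduled CI job can run just the tagged subset at any time with:
#   pytest -m high_value
# (registering the marker in pytest.ini avoids an 'unknown mark' warning)
```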

Another method to aid productivity is Test Driven Development (TDD). I won’t go into the full details here but, suffice it to say, it is a practice where developers write just enough code to make a failing test pass. There is a series of simple steps (illustrated in the sketch after these steps):

- Developer works with the tester to write a test
- Developer writes code which makes the test pass
- Developer refactors the code to evolve it into a better design
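
To make the loop concrete, here is a minimal sketch using Python’s unittest and a made-up discount function:

```python
import unittest

# Step 1: the developer and tester agree the behaviour and write the test
# first; it fails until apply_discount() exists and behaves correctly.
class TestDiscount(unittest.TestCase):
    def test_ten_percent_discount(self):
        self.assertAlmostEqual(apply_discount(100.0, 10), 90.0)

# Step 2: the developer writes just enough code to make the test pass.
def apply_discount(price, percent):
    return price * (1 - percent / 100)

# Step 3: refactor (rename, simplify, extract helpers) with the test as a
# safety net, re-running it after every change.

if __name__ == "__main__":
    unittest.main()
```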

This process can benefit the testers for the following reasons:

Documentation and Working Examples of Code – by writing unit tests, the developers provide the QA team with working samples of code, meaning they can gain a stronger understanding of how the system works, which improves the quality of any functional tests then written by the testers.

Improves Code Quality – with the unit tests written first, the quality of the code is stronger because it is written to pass the tests. This makes the tester’s job easier later in the cycle.

Team Works Together and Collaborates – there will be believers and non-believers in agile on every team. To make this process work, it may be a good idea to pair up a believer and a non-believer to help spread the practice. If and when a team is fully on board with TDD, the benefits for the product soon become apparent.

If the team can work effectively and the testers are able to set up an agile, automated environment, it will eventually free the testers to try other methods of finding defects, such as exploratory testing!

London BUPA 10km

I am running the BUPA London 10km Race on May 30th and I have chosen to raise money for MENCAP.

Having grown up with my sister Amy, who has Asperger’s, and living with my fiancée Heather, a teacher who has worked with children like Amy, Mencap has always been a charity close to my heart. The work they do to help children and adults with learning disabilities is amazing. I know from time spent with Amy and her friends that these people deserve the best, and Mencap can certainly help make their lives more rewarding.

So this run has given me the chance to raise money for a great cause and also to get into shape for my and Heather’s wedding in August!

So, if you can, please dig deep and donate. http://www.justgiving.com/siprior

Thank you

Simon