Testing Certifications – Are they worth the paper they are written on?

There has been a lot of buzz around this topic at recent testing events and in forum discussions between testing professionals. There are a fair few different certifications around, but are they of value? A quick Google search turned up the following certifications:

  • ISTQB – Foundation, Advanced (Test Manager, Test Analyst and Technical Test Analyst) and Practitioner
  • Certified Software Tester (CSTE) – varying levels; the same body also offers a separate Certified Software Quality Analyst (CSQA)
  • Certified Agile Tester (CAT)
  • Certified Software Test Professional (CSTP)

There are many others, as well as courses which don’t provide ‘certifications’.

Now, let me state that I have the ISEB/ISTQB Foundation and Intermediate Certificates in Software Testing. I attained them a long time ago, and at the time they worked well as a base of knowledge to get me going in the testing world. That is what I saw them as: a way to gain an understanding before applying the knowledge and diving deeper on my own into the different topics. I have since done lots of reading and many free online courses around aspects which may only have been briefly mentioned in the ISTQB courses.

Now here comes my rant… Doing the courses is one thing; for most of them, passing the exam means you have digested the definitions and content from the course and been able to answer most questions correctly in a multiple-guess exercise. THIS DOES NOT MAKE YOU A GREAT TESTER! Stating you are a certified tester sends out the wrong message. Putting it in your profile name on LinkedIn – “John Smith – ISTQB Foundation Certified Software Tester” – is wrong; it should not be something you are shouting from the rooftops. You should be saying something like the following:

I am an experienced Software Tester with advanced skills in x,y and z.

(note the lack of mention of the certification). Certifications should not define you as a tester; you should be considered for roles on your skills, not on whether you attended a particular course and passed an exam. Companies also need to stop specifying certifications as part of their job specs – there are plenty of very good testers who may not have them and who would do the role better than some of those who do.

There are plenty of courses worth doing out there that don’t give a Certified stamp at the end but still have some form of assessment, such as the BBST series (link here). These are courses I would like to get around to doing, but they are not 3-day courses with a multiple choice exam; they require continued effort over a period of weeks or months, with practical assessments as well as an exam.

Another course which comes highly recommended is the Rapid Software Testing course by James Bach, Michael Bolton and Cem Kaner, three industry gurus who give students the confidence that they can test anything in any timeframe (linky).

I guess the point I’m trying to make is that these ‘certifications’ should be treated like any other training course: if you feel you will get value from attending, then go on them. Just don’t hold the certification up as a badge, because it shouldn’t give you any additional kudos over other testers who haven’t got it.

Testers are people and people learn in different ways. Testing is a field of work where there are constantly new things to learn, new skills to develop and new concepts to get your head around. Not everything you need to know will be in the syllabus of a certification course.

Like I said before, I have nothing against some of these courses, but they don’t make you the complete tester. Use them as stepping stones to further your knowledge and grow in the testing role.

Personally, I find now that I learn just as much from reading other testers’ blogs or attending testing events and hearing new ideas. Learning opportunities can arise in many forms, and all will be useful in making you a better tester.

So my answer is: certifications are only worth the paper they are written on if they are then extended upon and the knowledge is applied, rather than treated as “that is everything I need to be a good tester”.

Spreading the Word – Being the Sociable Tester

For me, testing is a mindset rather than just a role, and sometimes that can and does affect other aspects of life. I’ve lost count of the number of times I have gone to break something intentionally, just to check that it can handle error cases. That would be fine, but doing it to the TV while my wife is trying to watch it may not be the best idea (just for the record, the TV didn’t break) – or even worse, ‘testing’ one of my nine-month-old son’s interactive toys! :) It’s a habit that is sometimes difficult to avoid…

It is sometimes easy to forget that not everyone has the same attitude towards testing. I was recently asked by someone who is not technical at all:

“Why do you test? What needs testing?”

I offered the usual examples:

“Would you be a passenger on a plane if you thought they hadn’t tested that it worked properly?”

“Would you put your child in a car seat if it hadn’t been safety tested?”

Then I suggested software is no different, and that everything on a PC/Mac/phone/tablet SHOULD be tested in some form before being deemed good enough to release to its intended audience.

Having this discussion got me thinking of ways I could help improve the attitude towards testing, especially among people who aren’t testers, while also improving my own skillset at the same time.

The first place to start for me was at work, with my colleagues. Aside from doing my job to the best of my ability, I have also done the following:

  • I have printed out James Bach’s blog post ‘A Tester’s Commitments’ (http://www.satisfice.com/blog/archives/652) and put it up next to my desk.
  • I have my software testing books on show so anyone can come and read/borrow/discuss parts of them
  • Not being afraid to talk about testing and suggest ideas to developers on how to make their code more testable, hopefully raising the awareness that they need to think about this before they develop their code
  • I have started putting together mindmaps of how testing could improve projects that currently don’t have the resource
  • Attempted to start an internal community where anyone who wants to discuss testing has somewhere to share ideas.

Then there is the external testing community:

  • Joined online communities such as the Software Testing Club
  • Attend conferences – there are plenty of these all year round; some are testing specific, some are software or even IT specific, but it’s the people present that make the conferences.
  • Read blogs (James Bach’s, as mentioned earlier, or look at ‘my favourite blogs’ in the menu bar at the top) and listen to podcasts (“Testing in the Pub” or “Let’s Talk about Tests, Baby” to name a couple that I have listened to recently)
  • Started a local tester gathering where people from all around the area can join and share ideas – not limiting it to just testers, but anyone who has an interest in testing (https://priorsworld.wordpress.com/aylesbury-tester-gathering)

Obviously, not everyone wants to be sociable, but I genuinely believe that my skillset and my people skills have improved no end since I started being more open to discussing, asking questions and sharing ideas and stories with other like-minded people.

So what’s the next step with people who aren’t yet like-minded? How do we raise the profile of testing so people understand the importance of the job we do? Some thoughts:

  • Holding some kind of event where non-testing people get the chance to try and find problems in a buggy piece of software?
  • Getting out into schools and teaching testing alongside programming in the new curriculum?

Any other thoughts? Would love to hear some ideas. 
Next stop… The world! 😜

Not Familiar with Testers? – Proving Your Worth With a Development Team

Last June, I had the opportunity of a new challenge within my current company, which I grasped with both hands. I moved away from a team which had a very well-oiled engineering process and a very stable test framework that gave them confidence in their product – a team I had worked in since graduating from University 7 years earlier. The team I moved to had no active testers and was still trying to define its engineering processes.

This has proved to be a challenging but enjoyable change and has really made me work hard to show what I can bring to the table and show why testing/QA teams are important.

One of the first actions when I joined the team was to ask to be added to code/peer reviews; previously, the code had been reviewed between developers only. This brought some resistance initially:

  • “Why do you need to be on Code Reviews, you’re only QA”
  • “What benefit will it bring having you on the review?”
  • “It will take longer”

I went into more detail on this in a previous post, but the point was that they were not keen to start with, and the next stage was to bombard me with so many code reviews that I had very little time to do anything else. I stuck with it and eventually got through them.

There was initially a reluctance to involve QA, but to be fair to the team of developers, they were open to try once we started to discuss things with them.

Over the next 6 months or so, as a team we worked hard to prove ourselves, and we are now at a point where QA are considered in design discussions, code reviews and any major decision making. It’s been a challenge, but we are now showing signs of working as one team. There’s still a way to go, but we are happy with the progress. So how did we get to this stage? I put it down to 3 things:

1. Getting the Right People – The team being put together has to have a solid set of skills: a good mix of traditional testing skills and good technical developers to work on the frameworks. The testing mindset needs to be there, and all of the team need to be strong enough to question things and follow through when something needs doing. When developers are reluctant to work with testers, it helps if the test team have the people skills to get to know them socially, or at least be prepared to talk about non-project/work topics, to build up enough of a relationship that it becomes easy to discuss work topics with them.

2. Find Ways to Be Involved – Asking questions, listening to conversations and being willing to take tasks which involve working alongside a developer will all help aid understanding of the functionality. Know your stuff; if something needs looking up, spend some time reading around the subject so that you can have discussions with the developers about it. Ultimately, it is about doing all you can so that the developers trust that you know what you’re doing and that you will test the product effectively and verify the quality. Set up bug scrubs or design discussions and invite development along; it’s things like this which will prove that you are all fighting for the same cause.

3. Find Issues Through Testing – It might sound obvious, but if the team were previously used to relying on unit testing and their own dev testing, then the QA testing needs to enhance the coverage and find issues that their testing wouldn’t find. Whatever way the testing needs to be done, put together a framework which will enable the team to spend their time testing, rather than constantly having to fix issues and not be sure whether the issues found are due to the framework or the product under test. The next stage in proving worth is to find issues which may not otherwise have been found – issues which would have caused major problems if released.

Having these 3 things will give you a fighting chance of a testing team which works well with development. Maybe we’ve been lucky with the people in our team, but the difference in the attitude towards the testing team over the last 6-9 months has been huge, and here’s hoping it will continue to improve.

TestBash 2015 – More than Just a Conference

I have to admit, I was really excited about attending TestBash in Brighton. It was my first conference for 18 months and there was a real buzz about this one on social media. The schedule looked really interesting, and there were five of us attending from work, so it was kind of like a team outing.

The journey down to Brighton on the Thursday didn’t go without a hitch: the train from Victoria to Brighton got caught behind a broken-down train, which meant we got in 30 minutes later than planned. By the time we had checked into the hotel and found somewhere to eat, it was too late to attend the Pre-Conference Social. Personally, I was gutted about this, as I had been speaking to several other testers on Twitter in the weeks beforehand and was looking forward to meeting them. The rest of the team seemed quite glad to be going back to the hotel to get some sleep, and I can’t really blame them for that.

The following morning, Jesus from our team and I got up at 6am and joined the Pre-Conference Run; we completed the 5km run along the promenade and were back at the hotel having breakfast by 7.30.

From the moment we walked into the Brighton Dome, TestBash had a different feel about it to other conferences I had attended. Whether it was the ninja stickers we wore with our names on rather than the formal name badges of other events, or the Ministry of Testing t-shirts, there was a real feel of community.

Here is the Intel Security team who attended the conference

There were lots of great talks during the day with lots of interesting concepts:

  • It was interesting to discover the difference between the testing and release processes of iOS and Android apps.
  • I was fascinated by Martin Hynie’s story of how changing the name of the Test team to Tech Research, then to Business Analysts, then back to Test caused the company to treat the same group of individuals differently – it really showed the power of job titles.
  • Vernon Richards gave an amusing look into some of the phrases that are thrown around about testing such as “Anyone can test” or questioning why testing didn’t find the one bug that caused problems in production. He also gave an example of how to deal with a product manager who wants a number for how long testing will take and doesn’t get the answer he wants.
  • Maaret Pyhajarvi’s session really showed that quality isn’t the responsibility of just the testers; in fact, Maaret went as far as to say that quality is built by the developers, and testers just inform on the quality. This came from her account of working as a solitary tester on a team of developers: initially, quality went down with the addition of a tester, as the developers became less vigilant with their testing before handing work over, expecting Maaret to pick up all the testing. She showed us how she managed to get them on board and, as a team, improve the quality.
  • Iain McCowatt discussed how some people have the intuition and tacit knowledge to see bugs, whereas others have to work methodically to find them; he then went into ways to harness the diversity amongst a test team.
  • The concept that stuck with me from Matthew Heusser’s talk on getting rid of release testing was that changing the process shouldn’t be done all at once; the best way is to try one or two new stages first and make gradual changes. (I also liked the fact that he worked on his slides on his tablet as he presented.)
  • Karen Johnson gave a very thought-provoking talk on how to ask questions. This really resonated with me, and I can certainly see ways to get more out of people when I’m asking questions.

There were 3 talks which really stood out for me.

The Rapid Software Testing Guide to What You Meant To Say – Michael Bolton

I had interacted with Michael a few years previously, when he had given me some constructive criticism on one of my earlier blog posts on this site, so I was intrigued to see what he was presenting. This was a very interesting talk; Michael is a very engaging speaker, and it’s clear why he is one of the most respected members of the testing community.

The concept of this session was to remind us of some of the phrases commonly used by testers which can cause misunderstanding or misconception. He showed some examples where he exchanged the word testing for “all of development” in phrases such as “Why is testing taking so long?” and “Can’t we just automate testing?”, suggesting that people may use testing as a scapegoat when in fact the whole process should take the blame.

Michael went on to talk about how safety language should be used – phrases such as “…yet” and “so far” – and about not making statements such as “It works”, instead stating “So far, the areas which I have tested appear to meet some requirements”, or something like that…

The discussion of testing vs checking came up (which was part of the issue Michael had with my earlier blog post… I’ve since done the necessary reading to know the difference), and he showed how checking fits into the testing process.

Overall, I think I learnt that it can sometimes be very easy to make statements which raise expectations more than they should, or give the wrong message completely. I will certainly be ensuring I use safety language more often in the future.

I also feel it would be really useful to go on the Rapid Software Testing course. Something to look into this year.

Why I Lost My Job as a Test Manager and What I Learnt as A Result – Stephen Janaway

I hadn’t heard Stephen present before, and he came across very well. His talk covered how, when he was working as a Test Manager in an agile process, he was managing individuals across several different teams while each team had its own development manager. He talked of the difficulties in decision making and how the products were slow to be developed and released.

Stephen then described what happened next: test managers and development managers were removed from their roles, a delivery manager was put in each team, and the process improved. The question then was what happened to the Test Manager? Stephen explained the roles that he is now involved in, such as coaching management on testing and on how to manage testers, setting up internal testing communities so that the testers still have like-minded people to discuss testing issues with now that they haven’t got a test manager, and generally being an advocate for testing/quality within the organisation.

It showed that Test Managers need to be adaptable and prepared to go down a slightly different path. This seems to be the way the testing industry is going, so it was interesting and reassuring to hear that there are other options out there.

The other point that hit me during this presentation was the one about internal testing communities. We have lots of individual test teams working on different projects, all developing their own automation frameworks and using different tools; it would be good to bring everyone together to share ideas, and maybe get some external speakers from the testing community in to inspire them.

I really enjoyed Stephen’s talk and it gave me plenty of food for thought about the future.

Automation in Testing – Richard Bradshaw

Richard’s talk resonated with me for one reason: he explained how in his early years he had been seen as the automation guy and would try to automate everything, until he realised that too much had been automated and a benefit was no longer being seen. I have seen for myself how teams can be so focused on having all of their testing automated that they actually spend more time fixing failing tests when the next build is completed than they do writing any other tests. I whole-heartedly agreed with Richard when he stated that automation should be used to assist manual testing (writing scripts for certain actions to speed up the process) rather than relying on automated tests for everything.

This does seem to be a hot topic for discussion, as there are questions around automated regression tests/checks and automated non-functional testing: how should these be approached? This presentation definitely gave me a lot to think about regarding how to improve the way we use automation when I get back into the office.

Richard presented this really well and I would say it was my favourite talk of the day.

I have to say that the schedule from start to finish was enjoyable, the lunch was delicious and the organisation of the day was fantastic. I honestly can’t wait to go back next year.

I really felt that the testing community is a great place to be: so many great people, great minds with interesting ideas, and a great chance to improve yourself by being able to attend conferences like this.

I am really glad that I found http://softwaretestingclub.com and was able to find details of the conference on there. The next step for me is to help set up the internal testing community and get them to look at it too. Maybe we can have a bigger Intel Security contingent next year, maybe I will find something to present. :)

Overcoming the Resistance – QA Involvement in Peer Reviews

Maybe I was quite naive about peer reviews, but my previous experience was that it was a natural process to have QA/testers involved in code reviews alongside developers. Whenever a new feature or a bug fix was implemented, before the code was checked in, the developer would set up a walkthrough with another developer and a member of the QA team. The team I was part of was quite a mature engineering team, with a defined coding standard which was ingrained into the team. Never was a second thought given to committing code without showing it to QA and talking it through with them. This would provoke discussions around how to test the features and whether the development-written unit tests had enough code coverage for QA’s satisfaction. Moving to a different team, I have seen that this isn’t necessarily part of the process: peer reviews happen between developers, and involving QA had never been considered.

I have read a lot about this subject and, actually, the level of maturity around code reviews in that first team is relatively rare in the industry. So why is it so rare? I guess it depends on perspective and the level of testing being done:

  • If ‘black box’ testing is the main form of testing, then I guess not knowing about the code is the ‘right’ thing to do?
  • If ‘white box’ testing is used, then knowing the functionality you are testing is paramount; other than functional specifications, the best way of seeing how the area being tested works is to review the code.

All testing I have ever done has been a mixture of both: there has obviously been a level of testing which I can pull straight out of the user guide/functional specification, and other testing where I have needed to know intricate details of the code to be able to ensure I am covering all feasible code paths.

I have always found it beneficial to be involved in peer reviews, even if I don’t say anything during the review and just soak in how the code works while writing down ideas for testing. But usually I will ask questions such as “what if I entered this here?” or “what if I did this?”, using the meeting to also force the developer to think about their implementation, rather than just plodding through code line-by-line.

So why is it so alien to some teams to involve QA? Here are some of my thoughts on some common phrases used:

  • QA don’t have the skills to review code – Not every QA resource will know the syntax of the particular language, but does that mean that, by sitting with developers and understanding how the code works, they won’t find issues or raise questions which provoke the developer to improve their code?
  • Having QA involved will delay the build/release – Only if you treat QA as a separate entity to development and do separate peer reviews. If they are involved in the same peer review, then it shouldn’t make much difference. We are here to prove the quality of the work, not be a hindrance.
  • It’s only a one line change, why would QA need to review that? – On one hand, yes, it is only one line, but the context of that one line could have an impact on some existing testing, or just having QA aware of the change could be useful.

I’m not for a second suggesting that all development teams are dead against the idea, but I think as we move to an increasingly ‘agile’ world, separating development and QA out at this level needs to change; we should be promoting a ‘One Team’ approach where value is provided by everyone involved. QA can bring value to code reviews. Quality should be built in at the start, and the earlier this can be proved, the better it is going forward. It needs to be clear that performing a critique of the code is not performing a critique of the developer.

Some quick wins may be needed to win the development team over:

  • Read up on the functionality before you attend the review, so you have a basic understanding of how it was described to be developed
  • Ask logical questions
  • Discuss options for testing
  • Flip it and have development review tests (share both activities)

Baby steps are needed for progress: find a way to get involved with some small tasks to start with, and build up trust with the developers that you are not just there to be a pain in the backside – if you work side by side with the development team, it will improve the product delivered.

Any thoughts on this topic would be most appreciated.

Lessons Learned from BCS SIGiST Summer 2013 Conference

Thought I could kill two ‘proverbial’ birds with one ‘proverbial’ stone here: document the latest conference I attended while also writing my first post on here in 2 years!

It was an early start, the team of us that were going met at the rail station and got the train into London Marylebone. A short walk to the venue and we were there ready to start. A quick cup of tea and a biccy, then into the lecture theatre for the opening keynote:

The Divine Comedy of Software Engineering by Farid Tejani (Ignitr Consulting)

Farid confidently presented a talk which outlined the changes that many industries have faced over the past decade – industries which have been ‘disrupted’ by the digital world – stating that “Instability is the new Stability”. Blockbuster Video and AOL ‘Free Internet’ CDs were used as examples of how industries have been disrupted. He then said that Software Engineering could be the next industry to be ‘disrupted’ by the digital revolution. If I understood Farid correctly, his point was that unless we adapt and change with our surroundings, companies will be left behind as their function becomes obsolete (much like Blockbuster Video).

He then turned to Software Testing and discussed how we were going wrong as an industry, suggesting some common misconceptions about testing which need to be corrected such as the following:

  • Testing is a risk mitigation exercise
  • Testing is an efficient way of finding defects
  • Testing adds quality to software

He listed 9 of these in total and explained why he felt they were myths. Farid then covered the ever-contentious issue of agile testing versus waterfall/V-model, and how he thought all companies need to move to a more ‘agile’ model to avoid being disrupted in the future. Personally, I can see what Farid is suggesting here; however, there will always be companies and industries which continue to use non-agile methodologies purely because they have to, or are mandated to (such as the aviation industry). That doesn’t mean they can’t evolve the way they work within their confines, though – there needs to be a better feedback loop at each stage of a project/product/programme.

Overall, it was an interesting talk, and the obvious message that came out of it was to ensure that, as testing professionals, we keep ahead of the curve, so to speak, and are aware of changes that may affect our ‘Multi-Billion $ Industry’.

Then time for a tea break and another biscuit or two (maybe there were muffins at this point too :-))

Expanding our Testing Horizons by Mark Micallef (University of Malta)

Mark Micallef has experience in the industry, having managed the BBC News testing team amongst other roles, and has now moved to academia to lecture on and research software testing. He brought his knowledge to SIGiST to talk about some new ideas and techniques.

Mark started the talk by saying that, having attended many conferences, he has found that the industry seems to be continuously talking about the same topics – Agile, Automation and Test Management amongst others. He felt that as industry professionals we should be striving to identify new and better ways to test and to find ‘The Next Big Thing’, as web automation was back in 2007. It was suggested that this could happen through the industry working more closely with academics. There may be obvious differences in the way the two worlds work, but there would be benefit for both sides. Academics tend to be all about understanding: they need to know 100% exactly what they intend to do, and they will look for perfect, complete and clean solutions. Industry professionals, by contrast, tend to be pragmatic and develop a solution that works and ‘adds value’ rather than being perfect and complete. That obviously doesn’t mean one is wrong; they just have different ways of working.

He then suggested 3 techniques which could be used to improve testing.

The first insight was “Catching Problems Early Makes Them Easier to Fix”

This related to static analysis tools, but took the idea one stage further, suggesting that most static analysis tools give too many alerts, which may lead to cognitive overload and eventually to the tool or technique being abandoned. This brought us to the suggested concept of the ‘Actionable Alert Identification Technique (AAIT)’: the idea that you apply criteria to the output of alerts – areas of code you wish to focus on, code from certain junior developers, or just code submitted in the last week – and the alerts are then ranked in prioritised order. This obviously depends on some form of automation being set up to perform the analysis, but the outcome would be worthwhile. A minimal sketch of the idea is below.
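To make the ranking concrete, here is a small sketch in Python of what such a prioritisation step might look like. The alert fields, criteria and weights are my own invention for illustration; the talk didn’t prescribe a particular scheme.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    file: str       # file the static analysis tool flagged
    author: str     # developer who last touched the flagged code
    age_days: int   # days since the flagged code was committed
    severity: int   # tool-reported severity, 1 (low) to 5 (high)

def rank_alerts(alerts, focus_files=(), junior_authors=(), recent_days=7):
    """Score each alert against the team's chosen criteria and return
    the list most-actionable first, rather than dumping every alert."""
    def score(a):
        s = a.severity
        if a.file in focus_files:        # area of code we want to focus on
            s += 3
        if a.author in junior_authors:   # code from less experienced developers
            s += 2
        if a.age_days <= recent_days:    # code submitted in the last week
            s += 2
        return s
    return sorted(alerts, key=score, reverse=True)

alerts = [
    Alert("payments.c", "alice", 2, 3),
    Alert("legacy/report.c", "bob", 400, 5),
]
for a in rank_alerts(alerts, focus_files={"payments.c"}, junior_authors={"alice"}):
    print(a.file)
```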

The second insight was “We Should be Aware of the Effectiveness of Our Test Suites”

Mark started off by suggesting that code coverage tools should be used to work out test coverage, but showed with simple examples that it is possible to achieve 100% line, decision and function coverage without actually testing anything useful. His example had a function meant to multiply two numbers together, with an assert that passing in 5 and 1 gives the answer 5. Simple enough – but if you change the multiply sign (*) for divide (/), the answer when you pass those two numbers in (in the same order) is still 5, effectively rendering the test useless! The suggested concept here was ‘Mutation Testing’. Mutation testing is where you create multiple ‘mutants’ of your product by modifying the code, then run these ‘mutants’ through your tests; it is a bad thing if all your tests pass as if nothing has changed. Ideally, even a small change in the code should cause at least one test to fail. Where a mutant survives without any failures, you can identify the changes needed to improve the test cases.
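Mark’s example translates almost directly into code. Here is my reconstruction in Python (the function and test names are mine): the first test passes against both the original and the mutant, while the second kills the mutant.

```python
def multiply(a, b):
    return a * b  # a mutant would change this to: a / b

def test_multiply_weak():
    # 100% line and function coverage, yet the divide mutant also
    # returns 5 for these inputs (5 / 1 == 5), so this test can't
    # tell multiplication from division: effectively useless.
    assert multiply(5, 1) == 5

def test_multiply_strong():
    # This input kills the mutant: 5 * 2 == 10, but 5 / 2 == 2.5,
    # so the mutated code makes the test fail, as it should.
    assert multiply(5, 2) == 10
```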

Mark did then highlight some problems with mutation testing, such as the expense of generating the mutants and executing the tests against them. Despite the problems, it sounded like a very interesting technique and is certainly something I intend to investigate further.

The third and final insight was “Testing will Never Provide Guarantees that Bugs do not Exist”

Mark talked about how testing is currently seen as a way of following every possible path in the code to improve coverage, and suggested that an additional concept, ‘Runtime Testing’, could help. This is where what you test is exactly what the user may be doing right now, so it would work well for web pages or apps/programs with a lot of user interaction. It requires mathematically working out the exact paths users may take when using the software/website, which will not necessarily come naturally to everyone. There may also be performance overheads if you are trying to test and measure what the user is currently doing.

Mark then went on to recommend working with research and academia, which can have huge benefits, and said we should look into some of the whitepapers available on different topics to get ideas.

This was a great presentation which gave some new ideas, and new ways of identifying ideas, by drawing on academic research.

Requirements Testing: Turning Compliance into Commercial Advantage by Mike Bartley (Test and Verification Solutions)

Following straight on from Mark’s presentation was this one on requirements testing. Mike started off with the example of buying a laptop: you have a set of things you want when you go to purchase a new one, but when you buy it, are you able to map the features of the laptop back to the requirements you originally had? It was an interesting point – people often have a pre-conceived idea of what they want and don’t always get exactly that.

Mike then moved on to software requirements and explained that they are a lot more complex. He said that poor and changing requirements have been the main cause of project failure for years, which highlights the need to capture requirements and store them. Mike then asked the question, “How do we make sure requirements are implemented and tested?” The obvious answer is to ensure that they are tracked and measured as the project carries on. Mike took this one step further and talked about mapping requirements to features, features to implementation, and finally to tests. Setting up a uni-directional flow chart of these mappings would highlight ‘test orphans’ (tests which don’t map to any requirements, features or implementations) and ‘test holes’ (gaps where there are no tests mapping to requirements, features or implementations). This highlighted the importance of knowing the exact purpose of every test; I’m sure most projects which have gone through several releases have orphaned tests that are kept purely for sentimental reasons. These orphans obviously waste time and effort, and of course the test holes highlight a risk, as a requirement will be missing a test.

Mike then went on to talk about requirements management, and how there is a reasonable number of tools which provide the ability to map requirements to features, designs, units and even code, but not necessarily to tests – or at least not with the ability to show test status or results.

The presentation then went down the track of showing how SQL can be used to create a bi-directional requirements mapping, meaning we can relate test status back to the requirements themselves. This sounds like a useful idea which could bring a good return on investment, although there would be an initial cost to build the database and to add all the requirements and test information into it. But the potential business advantage is massive: all holes and orphans could be identified quickly, along with analysis of the risk and impact.
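Mike didn’t show his actual schema, but a toy version makes the orphan and hole queries clear. The sketch below uses Python’s built-in sqlite3 module, with table and column names of my own choosing:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE requirements (req_id TEXT PRIMARY KEY, description TEXT);
CREATE TABLE tests (test_id TEXT PRIMARY KEY, status TEXT);
CREATE TABLE coverage (req_id TEXT, test_id TEXT);  -- requirement-to-test mapping

INSERT INTO requirements VALUES ('R1', 'User can log in'), ('R2', 'User can reset password');
INSERT INTO tests VALUES ('T1', 'pass'), ('T2', 'fail'), ('T99', 'pass');
INSERT INTO coverage VALUES ('R1', 'T1'), ('R1', 'T2');
""")

# Test holes: requirements with no test mapped to them (here, R2).
holes = db.execute("""
    SELECT r.req_id FROM requirements r
    LEFT JOIN coverage c ON c.req_id = r.req_id
    WHERE c.test_id IS NULL""").fetchall()

# Test orphans: tests that map back to no requirement (here, T99).
orphans = db.execute("""
    SELECT t.test_id FROM tests t
    LEFT JOIN coverage c ON c.test_id = t.test_id
    WHERE c.req_id IS NULL""").fetchall()

print("holes:", holes, "orphans:", orphans)
```

Because test status is stored alongside the mapping, the same joins could also report pass/fail per requirement, which is the bi-directional part of what Mike described.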

The potential of this idea is huge and again is something that will require some additional investigation.

It was then time for lunch, and I have to say the food was delicious: Moroccan lamb with wedges and salad, followed by Eton Mess! We did a little bit of networking and then decided that for half an hour we would go and wander around Regent’s Park and get some fresh air.

Improving Quality Through Building a More Effective Scrum Team by Pete George (Pelican Associates)

With my interest and knowledge in the Scrum area, this talk was probably the one I was most looking forward to before the conference. Pete George clearly knows his stuff and was able to keep everyone interested throughout his talk. He started off by talking about the “Marshmallow Challenge” (http://marshmallowchallenge.com/Instructions.html) and how it is a useful challenge for teams to try. He then showed a graph of how different professions had fared in the challenge. Engineers and architects came out on top, as you might expect, able to build the tallest structures – but the next highest were kindergarten children. This said more about the techniques used by the other teams, which would generally spend 90% of the time building a structure out of spaghetti, put the marshmallow on top with about a minute to go, watch it fall, then quickly throw together something in the last 30 seconds that holds the marshmallow. The kindergarten kids used a completely different strategy: build a stable structure with the marshmallow on it first, then continue to enhance it (effectively doing agile without realising it!). This was an interesting exercise which may be worth trying with our team.

Pete then gave a brief, succinct overview of the Scrum process using the 4 Artifacts – 3 Roles – 4 Ceremonies description. He talked about the fact that, with the continuous inspect-and-adapt approach, there is quality control built into the Scrum framework. But the point Pete addressed next was the crucial one: the team is only as good as the people in it. Companies spend all the money they have on changing the process teams work in, but if the teams aren’t working well together, projects will still fail. The problems come down to the fact that some people are set in their ways and refuse to change; these may be crucial people technically, but when it comes to teams, they don’t seem to fit. This is where things such as team building (socialising as a team) come in, but also the Belbin Team Roles theory – the idea being that any good team will cover off all 9 roles shown below:

Belbin Team Roles

There are some interesting roles here, in my head, I’m already trying to identify who is which role within my own team.

It was a useful talk; obviously Pete had to aim it at people who didn’t know a lot about agile/Scrum as well as those who did, and I thought he did a great job of covering both camps.

Mission Impossible: Effective Performance Evaluation as Part of a CI Approach by Mark Smith (Channel 4) and Andy Still (Intechnica)

This talk covered how Channel 4 took an approach to include performance testing within their Continuous Integration setup to get constant feedback on performance. Andy Still from Intechnica started the presentation by talking about the CI approach. He said the first thing to do when considering performance testing for Continuous Integration is to ensure that performance is treated as a first-class citizen – as important as any functional requirement. Performance requirements should be defined and documented alongside the functional requirements at the start of the project. Andy then said it was important that the tests have realistic pass/fail criteria, relevant to the stage of the project, and that they should be able to run without any human interaction or interpretation. The next point was an interesting one: Andy noted that performance is a linear problem, not a binary one, meaning the check-in that broke the build may not be the one which caused the failure – the last check-in may have ‘tipped the scale’, but a previous one may have pushed the scales from nowhere near the limit to close to it. Andy then started a small debate by asking whether the real challenges to successfully implementing performance tests within CI were down to process or tools. He presented both sides of the argument well and left us to decide. Personally, I was split, as I can see that both process and tools present issues that could prove large challenges for teams trying to set this up.

Andy then handed over to Mark, who talked more about Channel 4’s particular needs for CI performance testing. Mark mentioned that performance issues can sometimes be more challenging to fix than functional issues, and that build failures and short feedback loops can stop these breakages making it into the codebase. He then discussed whether we would want pure CI performance testing, stating that it may be disruptive at the start of a project, but on stable projects it would provide early feedback on performance issues. He then talked about Channel 4’s particular choice of tools: Jenkins for build management, JMeter for load testing and WebPageTest for front-end instrumentation. Mark then went through how their particular system works.
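The “no human interaction” point is the crux: the build has to judge the numbers itself. As a minimal sketch (not Channel 4’s actual setup), a Jenkins build step could run a script like this over JMeter’s CSV results file, which includes an ‘elapsed’ column of response times in milliseconds; the threshold value is invented and would be tuned to the stage of the project.

```python
import csv
import sys

THRESHOLD_MS = 800  # invented pass/fail limit; tune per project stage

def p95(values):
    """95th-percentile response time from a list of samples."""
    ordered = sorted(values)
    return ordered[int(len(ordered) * 0.95)]

# JMeter CSV results include an 'elapsed' column (response time in ms).
with open(sys.argv[1], newline="") as f:
    elapsed = [int(row["elapsed"]) for row in csv.DictReader(f)]

result = p95(elapsed)
print(f"95th percentile: {result} ms (threshold {THRESHOLD_MS} ms)")
# A non-zero exit code fails the Jenkins build, with no human in the loop.
sys.exit(0 if result <= THRESHOLD_MS else 1)
```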

Finally, Mark gave a brief description of what CI performance testing gives us: very short feedback loops, the ability to fail builds based on multiple metrics, and performance trending data.

Another very interesting talk, and possibly my favourite presentation of the day: very well presented, and a topic which could be of use.

Time for the final tea-break before the closing keynote.

Keep Calm and Use TMMi by Clive Bates (Experimentus)

Clive started by introducing who Experimentus are and what they do: an IT services company who help optimise their clients’ approach to software quality management. He then stated that testing needs to improve, because software fails and we need to be efficient at stopping those failures. He talked about doing things the right way and getting rid of the barriers to quality work. The place to start is to recognise there is a desire to do better and to gather evidence of the problem. Clive then mentioned the 7-point methodology used to help companies look at their process and make improvements, called IMPROVE (Initiate, Measure, Prioritise & Plan, Define/Re-Define, Operate, Validate, Evolve). Clive then went on to talk about Test Maturity Model Integration (TMMi), a staged assessment model like CMMi but focused on the testing process. There are 5 levels, from Level 1 – Initial to Level 5 – Optimising. From what Clive said, I understood that most companies would be working towards Level 2/3, as these are the levels that require the most effort; levels 4 and 5 then offer the icing on the cake, so to speak (there is more to it than that, but you get the idea).

Clive talked about why a client would use the TMMi model, stating that it is the de-facto international standard for measuring test maturity and that it is focused on moving organisations from defect detection to defect prevention.

He then talked about the features of an assessment and the types of companies who had gone through the assessment so far. It was an interesting talk and something different from the rest of the day. I didn’t feel there was anything I could personally take back from this, other than to ask the powers-that-be whether they were aware of it.

That was the final talk of a full and enjoyable day of presentations. I heard good things about the two workshops, but unfortunately there weren’t any places left when I looked at going to them.

So things I will take away:

– Mutation Testing looks like a useful technique to try
– Actionable Alert Identification Technique is something which could help us cut the noise from Static Analysis
– Using SQL to map requirements to features/tests looks useful
– Belbin Team Roles Theory could be used to associate team members with roles
– Continuous Integration with Performance testing is a must!
– I would like to attend the Certified Agile Tester training. :-)

Agile Attitude

Heard these phrases before?

“How hard can adopting Agile methodology be?”

“If I go on a ‘Certified Scrum-Master’ course, it will all be ok, right?”

It is very common for agile to be misunderstood at all levels.

For a new team, starting to implement agile methodologies can be a daunting job, and the initial step is often to read a book and take it from there. I’m not saying there is anything wrong with reading a book – in fact, why not read 2, 3 or more? But in my experience, reading a book only helps the person reading it; that book will need to go around everyone in the team, including project managers, CEOs, customers and everyone else involved in the project. Then comes the fun bit of discovering that everyone has interpreted the book slightly differently and some ‘don’t agree with all the principles’. This should ring alarm bells, but in a lot of cases doesn’t: if members of the team don’t agree with all the principles of agile, then implementing it effectively will prove problematic. Once everyone has read and digested a book (or 2), the next stage should be to get everyone into a room for some interactive training. This can be a chance for an agile consultant/coach to come in and show the team how it works, or a chance for the team to get together and discuss everything they have understood; either way, the important thing is that EVERYONE buys into it. A development team that buys in but a development manager that doesn’t can cause no end of issues with how the project will be run – or, the more common case, the team buys in but higher-level management still want to see plans, schedules and a final release date. Of course, especially for a team new to agile, a firm schedule is very difficult to estimate when the team’s velocity is unknown, and a ‘firm’ schedule does slightly defeat the point of the term ‘Agile’.

A problem that is maybe a bigger concern is the team that buys into agile ‘half-heartedly’: they send one person on a Certified Scrum Master course and then hold all the sprint-related meetings (kick-off, planning, daily stand-up etc.) but do very little else – no TDD, no pair programming, no backlog burndown. This probably suits the members who are ‘non-committal’ about the process, as it means very little extra effort apart from a few meetings. The problem is that there will be no marked quality improvement in the deliverables, and they won’t necessarily be delivered any quicker. It is imperative when implementing agile processes that the team is disciplined and everyone fully accepts what is expected of them.

Agile should be implemented on strong foundations of understanding and a willingness to change, from top to bottom. The most important change is one of attitude over priorities: working software instead of intricate design documents, willingness to respond to change over following a strict plan (not to quote the agile manifesto, of course!). The team will need to be motivated and recognised for successes – maybe a pizza delivery lunch once a sprint, or even a team-building day out; these kinds of gestures will get the team bonding and wanting to work together to produce the goods! The move to agile should be a positive change; if it isn’t working, analyse the root cause and resolve any issues. There is a chance that the change to agile isn’t right for the team/project, and if that’s the case, then a process that brings the best out of the team should be identified. It isn’t for everyone, but those who use it, and use it well, have never looked back.

It should be understood that a difference in results will not happen overnight; getting the whole team on board and producing reliable, frequent deliverables of higher quality will take time and investment.

So be patient, I believe it’s been said before: “Good things come to those who wait!”