Thought I could kill two ‘proverbial’ birds with one ‘proverbial’ stone here and document the latest conference I attended while also writing my first post on here in 2 years!
It was an early start: those of us who were going met at the rail station and got the train into London Marylebone. A short walk to the venue and we were there, ready to start. A quick cup of tea and a biccy, then into the lecture theatre for the opening keynote:
The Divine Comedy of Software Engineering by Farid Tejani (Ignitr Consulting)
Farid confidently presented a talk outlining the changes many industries have faced over the past decade as they were ‘disrupted’ by the digital world, stating that “Instability is the new Stability”. Blockbuster Video and AOL’s ‘Free Internet’ CDs were used as examples of how industries have been disrupted. He then suggested that Software Engineering could be the next industry to be ‘disrupted’ by the digital revolution. If I understood Farid correctly, his point was that unless we adapt and change with our surroundings, companies will be left behind as their function becomes obsolete (much like Blockbuster Video).
He then turned to Software Testing and discussed how we were going wrong as an industry, suggesting some common misconceptions about testing which need to be corrected, such as the following:
– Testing is a risk mitigation exercise
– Testing is an efficient way of finding defects
– Testing adds quality to software…
He listed nine of these in total and explained why he felt they were myths. Farid then covered the ever-contentious issue of agile testing versus waterfall/V-model, arguing that all companies need to move to a more ‘agile’ model to avoid being disrupted in the future. Personally, I can see what Farid is suggesting here; however, there will always be companies and industries which continue to use non-agile methodologies purely because they are mandated to (the aviation industry, for example). That doesn’t mean they can’t evolve the way they work within those confines, though: there needs to be a better feedback loop at each stage of a project/product/programme.
Overall, it was an interesting talk, and the obvious message that came out of it was that, as testing professionals, we should keep ahead of the curve, so to speak, and stay aware of changes that may affect our ‘Multi-Billion $ Industry’.
Then time for a tea break and another biscuit or two (maybe there were muffins at this point too :-))
Expanding our Testing Horizons by Mark Micallef (University of Malta)
Mark Micallef has industry experience, having managed the BBC News testing team amongst other roles, and has now moved to academia to lecture on and research Software Testing. He brought his knowledge to SIGiST to talk about some new ideas and techniques.
Mark started the talk by saying that, having attended many conferences, he has found the industry seems to keep talking about the same topics – Agile, Automation and Test Management amongst others. He felt that as industry professionals we should be striving to identify new and better ways to test and to find ‘The Next Big Thing’ (which was Web Automation back in 2007). It was suggested that this could happen by the industry working more closely with academics. There are obvious differences in the way the two worlds work, but there would be benefit for both sides. Academics tend to be all about understanding: they need to know exactly what they intend to do, and they look for perfect, complete and clean solutions. Industry professionals, by contrast, tend to be pragmatic and develop a solution that works and ‘adds value’ rather than one that is perfect and complete. Neither approach is wrong; they are just different ways of working.
He then suggested 3 techniques which could be used to improve testing.
The first insight was “Catching Problems Early Makes Them Easier to Fix”
This related to static analysis tools but took things one stage further, suggesting that most static analysis tools give too many alerts, which can lead to cognitive overload and, eventually, to the tool or technique being abandoned. This brought us to the suggested concept of an ‘Actionable Alert Identification Technique (AAIT)‘. The idea is that you apply criteria to the stream of alerts – areas of code you wish to focus on, certain junior developers’ code you want to look at, or just code submitted in the last week – and the alerts are then ranked in prioritised order. This obviously depends on some form of automation being set up to perform the analysis, but the outcome would be worthwhile.
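To make the AAIT idea a bit more concrete, here is a minimal sketch of the kind of ranking I imagine it doing (the alert fields, weights and criteria are all my own invention, not anything Mark presented):

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical alert record, as it might come out of a static analysis tool.
@dataclass
class Alert:
    file: str
    rule: str
    severity: int        # e.g. 1 (info) .. 5 (critical)
    author: str
    committed: date

def priority(alert: Alert, focus_paths: set[str], watch_authors: set[str]) -> float:
    """Score an alert so the most 'actionable' ones float to the top."""
    score = float(alert.severity)
    if any(alert.file.startswith(p) for p in focus_paths):
        score += 3                      # code areas we care about right now
    if alert.author in watch_authors:
        score += 2                      # e.g. newer developers whose code we review closely
    if (date.today() - alert.committed).days <= 7:
        score += 2                      # recently changed code is cheaper to fix
    return score

def actionable(alerts: list[Alert], focus_paths: set[str],
               watch_authors: set[str], top_n: int = 20) -> list[Alert]:
    """Return only a manageable, prioritised slice of the tool's raw output."""
    ranked = sorted(alerts, key=lambda a: priority(a, focus_paths, watch_authors), reverse=True)
    return ranked[:top_n]
```

The point is simply that the tool's raw output gets filtered and ordered before a human ever sees it, so the team only reviews a short, prioritised list rather than drowning in alerts.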
The second insight was “We Should be Aware of the Effectiveness of Our Test Suites”
Mark started off suggesting that code coverage tools should be used to work out test coverage, but showed with simple examples that you can achieve 100% line, decision and function coverage without actually testing anything useful. His example had a function meant to multiply two numbers together, with an assert that passing in 5 and 1 would give the answer 5. Simple enough, but if you change the multiply sign (*) for divide (/), passing in those two numbers in the same order still gives 5, effectively rendering the test useless! The suggested concept here was ‘Mutation Testing‘. Mutation testing is where you create multiple ‘mutants’ of your product by modifying the code and then run these ‘mutants’ through your tests; it is a bad sign if all your tests pass as if nothing has changed. Ideally, even a small change in the code should cause at least one test to fail. Where a mutant survives all your tests, you can identify the changes needed to improve the test cases.
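To make the multiply/divide example concrete, here is a rough sketch of how the weak assertion survives the mutant while a better-chosen one kills it (function and test names are mine, not Mark's):

```python
def multiply(a, b):
    return a * b          # the 'mutant' would change this to: return a / b

def test_multiply_weak():
    # Passes for both the original and the a / b mutant, so it kills nothing:
    # 5 * 1 == 5 and 5 / 1 == 5.
    assert multiply(5, 1) == 5

def test_multiply_stronger():
    # 5 * 3 == 15 but 5 / 3 != 15, so this test would 'kill' the divide mutant.
    assert multiply(5, 3) == 15
```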
This did highlight some problems with mutation testing, such as the expense of generating mutants and executing the tests against them. Despite the problems it sounded like a very interesting technique, and it is certainly something I intend to investigate further.
The third and final insight was “Testing will Never Provide Guarantees that Bugs do not Exist”
Mark talked about how testing is currently seen as a way of following every possible path in the code to improve coverage. He suggested that an additional concept, ‘Runtime Testing’, could help. This is where the testing follows exactly what the user may be doing right now, so it would work well for web pages or apps where there is a lot of user interaction. It requires mathematically working out the exact paths users take when using the software or website, which will not come naturally to everyone, and there may also be performance overheads if you are trying to test and measure what the user is currently doing.
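Mark didn't go into implementation detail, but one way I can picture this working is to derive the most likely user journeys from observed behaviour and focus test effort there. A very simplified, hypothetical sketch:

```python
from collections import Counter, defaultdict

# Hypothetical page-to-page transitions observed in production logs.
observed = [("home", "search"), ("search", "product"), ("product", "basket"),
            ("home", "offers"), ("search", "product"), ("product", "home")]

counts = defaultdict(Counter)
for src, dst in observed:
    counts[src][dst] += 1

def transition_probabilities(page):
    """Turn raw counts into probabilities, i.e. what users are actually doing right now."""
    total = sum(counts[page].values())
    return {dst: n / total for dst, n in counts[page].items()}

# The highest-probability transitions are the paths most worth exercising in tests.
print(transition_probabilities("search"))   # {'product': 1.0}
print(transition_probabilities("product"))  # {'basket': 0.5, 'home': 0.5}
```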
Mark then went on to recommend working with researchers and academia, which can have huge benefits, and suggested we look into some of the white papers available on different topics for ideas.
This was a great presentation which offered some new ideas, and new ways of finding them through academic research.
Requirements Testing: Turning Compliance into Commercial Advantage by Mike Bartley (Test and Verification Solutions)
Following straight on from Mark’s presentation was this one on Requirements Testing. Mike started off with the example of buying a laptop: you have a set of things you want when you go to purchase a new one, but once you have bought it, can you map the features of the laptop back to the requirements you originally had? It was an interesting point – people often have a preconceived idea of what they want and don’t always get exactly that.
Mike then went on to software requirements and explained that they are a lot more complex. He said that poor and changing requirements have been the main cause of project failure for years, which highlights the need to capture requirements and store them. Mike then asked, “How do we make sure requirements are implemented and tested?” The obvious answer is to ensure they are tracked and measured as the project carries on. Mike took this one step further and talked about mapping requirements to features, features to implementation, and finally to tests. Setting up a uni-directional flow chart of these mappings would highlight ‘Test Orphans’ (tests which don’t map to any requirements, features or implementations) and ‘Test Holes’ (gaps where there are no tests mapping to requirements, features or implementations). This highlighted the importance of knowing the exact purpose of every test, and I’m sure most projects which have gone through several releases have orphaned tests that are kept purely for sentimental reasons. These orphans obviously waste time and effort, and the test holes highlight a risk, as a requirement will be missing a test.
Mike then went on to talk about Requirements Management and how there is a reasonable number of tools which can map requirements to features, designs, units and even code, but not necessarily to tests – or at least not with the ability to show test status or results.
The presentation then went down the track of showing how SQL can be used to create a bi-directional requirements mapping, meaning we can relate test status back to the requirements themselves. This sounds like a useful idea which could bring a good return on investment; there would be an initial cost to build the database and to add all the requirements and test information into it, but the potential business advantage is massive – all holes and orphans could be identified quickly, along with analysis of the risk and impact.
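As a rough illustration of the kind of queries involved (the schema and table names here are my own invention, not Mike's actual design), the orphans and holes fall out of a couple of simple joins:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE requirements (req_id TEXT PRIMARY KEY, description TEXT);
    CREATE TABLE tests        (test_id TEXT PRIMARY KEY, status TEXT);
    CREATE TABLE req_test_map (req_id TEXT, test_id TEXT);

    INSERT INTO requirements VALUES ('REQ-1', 'User can log in'), ('REQ-2', 'User can reset password');
    INSERT INTO tests        VALUES ('TC-1', 'passed'), ('TC-99', 'passed');
    INSERT INTO req_test_map VALUES ('REQ-1', 'TC-1');
""")

-- = None  # (placeholder removed)

# 'Test holes': requirements with no test mapped to them.
holes = db.execute("""
    SELECT r.req_id FROM requirements r
    LEFT JOIN req_test_map m ON r.req_id = m.req_id
    WHERE m.test_id IS NULL
""").fetchall()

# 'Test orphans': tests which map back to no requirement.
orphans = db.execute("""
    SELECT t.test_id FROM tests t
    LEFT JOIN req_test_map m ON t.test_id = m.test_id
    WHERE m.req_id IS NULL
""").fetchall()

print("Holes:", holes)      # [('REQ-2',)]
print("Orphans:", orphans)  # [('TC-99',)]
```

Joining test status onto the requirements table in the same way is what gives the bi-directional view Mike described: requirements to tests, and test results back to requirements.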
The potential of this idea is huge and again is something that will require some additional investigation.
It was then time for lunch, and I have to say the food was delicious: Moroccan lamb with wedges and salad, followed by Eton Mess! We did a little bit of networking and then decided to spend half an hour wandering around Regent’s Park to get some fresh air.
Improving Quality Through Building a More Effective Scrum Team by Pete George (Pelican Associates)
With my interest and knowledge in the Scrum area, this talk was probably the one I was most looking forward to before the conference. Pete George clearly knows his stuff and was able to keep everyone interested throughout his talk. He started off by talking about the “Marshmallow Challenge” (http://marshmallowchallenge.com/Instructions.html) and how it is a useful challenge for teams to try. He then showed a graph of how different industries had fared in the challenge. Unsurprisingly, engineers and architects came out on top, as they were able to build the highest structures, but the next highest were kindergarten children. This says more about the techniques used by the other teams, which generally spend 90% of the time building a structure out of spaghetti, put the marshmallow on top with about a minute to go, watch it fall, and then quickly throw together something in the last 30 seconds that holds the marshmallow. The kindergarten kids use a completely different strategy: they build a stable structure with the marshmallow from the start and then keep enhancing it (effectively doing agile without realising it!). This was an interesting exercise which may be worth trying with our team.
Pete then gave a succinct overview of the Scrum process using the 4 Artifacts – 3 Roles – 4 Ceremonies description. He then talked about the fact that, with the continuous inspect-and-adapt approach, there is quality control built into the Scrum framework. But the point Pete addressed next was the crucial one: a team is only as good as the people in it. Companies spend all the money they have on changing the process the teams work in, but if the teams aren’t working well together then projects will still fail. The problems come down to the fact that some people are set in their ways and refuse to change; they may be crucial people technically, but when it comes to teams they don’t seem to fit. This is where things such as team building (socialising as a team) come in, but also looking at the Belbin Team Roles theory. The idea is that any good team will cover off all nine roles (Plant, Resource Investigator, Co-ordinator, Shaper, Monitor Evaluator, Teamworker, Implementer, Completer Finisher and Specialist).
There are some interesting roles here; in my head, I’m already trying to identify who fills which role within my own team.
It was a useful talk. Pete obviously had to aim it at people who didn’t know a lot about agile/Scrum as well as those who did, and I thought he did a great job of covering both camps.
Mission Impossible: Effective Performance Evaluation as Part of a CI Approach by Mark Smith (Channel 4) and Andy Still (Intechnica)
This talk covered how Channel 4 took an approach of including performance testing within their Continuous Integration setup to get constant feedback on their performance. Andy Still from Intechnica started off the presentation by talking about the CI approach. He mentioned that the first thing to do when considering performance testing for Continuous Integration is to ensure that performance is treated as a first-class citizen: it should be treated as being as important as any functional requirement, and performance requirements should be defined and documented alongside the functional requirements at the start of the project. Andy then said that the tests should have realistic pass/fail criteria relevant to the stage of the project, and should be able to run without any human interaction or interpretation. The next point was an interesting one: Andy noted that performance is a linear problem, not a binary one, meaning the check-in that broke the build may not be the one which caused the failure – the last check-in may have ‘tipped the scale’, but a previous one may have pushed the scales from nowhere near the limit to very close to it. Andy then started a small debate by asking whether the real challenges to successfully implementing performance tests within CI come down to process or tools. He presented both sides of the argument well and left us to decide. Personally, I was split, as I can see that both process and tools present issues that could prove to be large challenges for teams trying to set this up.
Andy then handed over to Mark, who talked more about Channel 4’s particular needs for CI performance testing. Mark mentioned that performance issues can sometimes be more challenging to fix than functional issues, and that build failures and short feedback loops can stop these breakages from making it into the codebase. He then discussed whether you would want pure CI performance testing, stating it may be disruptive at the start of a project but that, on stable projects, it provides early feedback on performance issues. He then talked about Channel 4’s particular choice of tools: Jenkins for build management, JMeter for load testing and WebPageTest for front-end instrumentation. Mark then went through how their particular system works.
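They didn't share their actual scripts, but the general shape of a CI performance gate like this is straightforward: run JMeter as part of the build, then parse the results and fail the build if a stage-appropriate threshold is breached. A hypothetical sketch (the threshold value is made up; the 'elapsed' column is JMeter's response time in its default CSV/JTL output):

```python
import csv
import statistics
import sys

# Hypothetical threshold for this stage of the project, in milliseconds.
P95_LIMIT_MS = 800

def p95_elapsed(jtl_path: str) -> float:
    """Read a JMeter CSV/JTL results file and return the 95th-percentile response time."""
    with open(jtl_path, newline="") as f:
        times = [int(row["elapsed"]) for row in csv.DictReader(f)]
    return statistics.quantiles(times, n=20)[18]   # 19 cut points; index 18 is the 95th percentile

if __name__ == "__main__":
    p95 = p95_elapsed(sys.argv[1])
    print(f"p95 = {p95:.0f} ms (limit {P95_LIMIT_MS} ms)")
    # A non-zero exit code is all Jenkins needs to mark the build as failed.
    sys.exit(0 if p95 <= P95_LIMIT_MS else 1)
```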
Finally, Mark gave a brief description of what CI performance testing gives us: very short feedback loops, the ability to fail builds based on multiple metrics, and performance trending data.
Another very interesting talk and possibly my favourite presentation of the day: very well presented, and a topic which could be of real use to us.
Time for the final tea-break before the closing keynote.
Keep Calm and Use TMMi by Clive Bates (Experimentus)
Clive started by introducing who Experimentus are and what they do: an IT services company who help optimise their clients’ approach to Software Quality Management. He then stated that testing needs to improve because software fails, and we need to be efficient at stopping these failures. He talked about doing things the right way and getting rid of the barriers to quality work; the place to start is to recognise there is a desire to do better and to gather evidence of the problem. Clive then mentioned the 7-point methodology used to help companies look at their process and make improvements, called IMPROVE (Initiate, Measure, Prioritise & Plan, Define/Re-Define, Operate, Validate, Evolve). Clive then went on to talk about Test Maturity Model Integration (TMMi), which is a staged assessment model like CMMi but focused on the testing process. There are five levels, from Level 1 – Initial to Level 5 – Optimising. From what Clive said, I understood that most companies would be working towards Levels 2/3, as these are the levels that require the most effort; Levels 4 and 5 then offer the icing on the cake, so to speak (there is more to it than that, but you get the idea).
Clive talked about why a client would use the TMMi model, stating that it is the de facto international standard for measuring test maturity and that it is focused on moving organisations from defect detection to defect prevention.
He then talked about the features of an assessment and the types of companies who had gone through one so far. It was an interesting talk and something different from the rest of the day. I didn’t feel there was much I could personally take back from it, other than to ask the powers-that-be whether they were aware of TMMi.
That was the final talk of a full and enjoyable day of presentations. I heard good things about the two workshops, but unfortunately there weren’t any places left when I looked at joining them.
So things I will take away:
– Mutation Testing looks like a useful technique to try
– Actionable Alert Identification Technique is something which could help us cut the noise from Static Analysis
– Using SQL to map requirements to features/tests looks useful
– Belbin Team Roles Theory could be used to associate team members with roles
– Continuous Integration with Performance testing is a must!
– I would like to attend the Certified Agile Tester training. :-)