TDD - The exponential and magic return on investment - Test Automation
"I don't think these new Quality Assurance practices will work in our enterprise architecture environment. HPQC is at the top of its game, and we have the best QA staff around."
It's not an uncommon response among very good developers, testers, architects and brainiacs - I was just lucky to have someone willing to measure the performance data to prove their point ;) Lucky for me, that pessimist proved my point and became a wonderful agilist.
Initially I kicked off the conversation by introducing the 'left of field' concepts in early agile teaching sessions. I shared readings and videos as a knowledge baseline while negotiating for a protected and dedicated agile pilot team. That foundation of knowledge proved beneficial: it piqued their curiosity and provoked some fantastic debates about what quality is, and what is actually worth testing. Like most waterfall teams, their testing cycles were all about proving a requirement was done according to a specification documented... sometime long ago. That's a beneficial method when you're developing chemical medications; however, no one in that waterfall cycle asks: did we actually do what the customer wants?
Enter TDD and Test Automation - and why the developers should test first and the testers should write scripts!
Why swap roles and learn what the other does? Quite simply because it's often initially painful to do something so different: think like a tester and write unit tests up front, or think like a developer and write instructional scripts to run. That pain is the coach's best friend, because each member will want to get through it as quickly as possible. As a coach you have a balancing act to perform: push the training hard enough that they see rewards, but not so hard that they get frustrated and give up.
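To make "write unit tests up front" concrete, here's a minimal test-first sketch in Java (the class name and the discount rule are hypothetical, not from the team's codebase): the test is written first, watched fail, and then just enough code is added to make it pass.

```java
// Minimal TDD sketch (hypothetical example): the test in main() was written
// before applyDiscount existed, watched fail ("red"), and then the method
// was implemented just far enough to make it pass ("green").
public class DiscountTddExample {

    // Step 2: the implementation, written only after the test was failing.
    static double applyDiscount(double price, double percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent must be between 0 and 100");
        }
        return price - (price * percent / 100.0);
    }

    // Step 1: the test, written first. In a real build this would be a
    // JUnit @Test method run by the CI server; plain checks keep the
    // sketch self-contained.
    public static void main(String[] args) {
        check(applyDiscount(100.0, 30.0) == 70.0, "30% off 100 should be 70");
        check(applyDiscount(50.0, 0.0) == 50.0, "0% off should change nothing");
        System.out.println("all tests pass");
    }

    static void check(boolean ok, String message) {
        if (!ok) throw new AssertionError(message);
    }
}
```

In practice the team would commit the failing test first, then the implementation, and let the continuous integration build confirm the green state on every change.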
Once the knowledge foundation is set, the brief given, and the iterations started, get the team to set the benchmarks they want to achieve and the rate at which they'll achieve them. For example, aim for 30% code coverage by Release 1, and a 30% pass rate on automated tests. That may sound like a low bar to set, but it's a very realistic one for a team of ten people who have been working a specific way for ten years, under a lot of delivery stress, while trying out a completely new way of working.
Don't forget you need to configure your test suite and your build tools, and ensure your environments are configurable to match - some teams won't even have a single repository to use. You'll also leverage your learnings by ensuring your tech lead (whom you don't want to count as coding capacity) and your ops manager are investing in continuous integration at the same time.
With the team going for 30% code coverage by Release 1, they get to pick and choose which elements to apply TDD to and which to automate: roughly one in three tasks becomes a learning task. Don't forget to allow for that learning complexity when the team is poker planning.
Here are some achievement notes - from the team itself - for a few of the first sprints, to show the progression (note: the first two sprints, not counting Sprint 0, were four weeks; the third was three weeks; from the fourth they moved to two weeks):
Sprint 1: (xxxxx = software component or brand)
SoapUI integration into Bamboo, providing automatic Enterprise Service testing
Testers writing tests in SoapUI - using SoapUI Pro
Creating stub services of the Internode Atomic services (including xxxxx and xxxxx), allowing XXXX services to be tested
Expanded knowledge of the Enterprise/Domain/Atomic services produced by xxxxxx
Learnt how to use SoapUI to test SOAP calls
Initiated the pilot of new integration testing
We are well underway in integration testing xxxxxxx per xxxxxx (45% coverage).
Testing of SOAP calls to the various atomic/mocked services and the database was conducted in xxxxxxx own development environment to verify that xxxxx deliverables can be properly integrated at the Enterprise business level.
The results of our testing are committed to SVN, which Bamboo uses to run automated test builds.
This means no separate regression testing is required for SOAP calls, as it is now incorporated into Continuous Integration.
Also worth noting is the drastic reduction in testing at the UI level.
The results of our SOAP testing can also be re-run for testing at the DMZ level in future sprints.
This is because SoapUI projects are saved as XML files and can be reused whenever required.
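The stub services mentioned in the Sprint 1 notes can be sketched roughly as follows (hypothetical endpoint and payload; in practice SoapUI's own MockService feature would normally play this role): a tiny local server returns a canned SOAP envelope, so the enterprise-level service can be integration-tested without the real atomic service behind it. This sketch uses only the JDK's built-in HttpServer and exercises the stub once, the way a test step would.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Hypothetical stub of an atomic service: it answers every request on
// /stub/customer with a fixed SOAP envelope, standing in for the real backend.
public class AtomicServiceStub {
    static final String CANNED_RESPONSE =
        "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
        + "<soap:Body><GetCustomerResponse><status>OK</status>"
        + "</GetCustomerResponse></soap:Body></soap:Envelope>";

    public static void main(String[] args) throws Exception {
        // Bind to an ephemeral port and register the canned SOAP response.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/stub/customer", exchange -> {
            byte[] body = CANNED_RESPONSE.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "text/xml; charset=utf-8");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        int port = server.getAddress().getPort();

        // Hit the stub once, the way a SoapUI test step would hit a real endpoint.
        URL url = new URL("http://localhost:" + port + "/stub/customer");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream()) {
            String body = new String(in.readAllBytes(), StandardCharsets.UTF_8);
            System.out.println("stub returned status OK: "
                + body.contains("<status>OK</status>"));
        } finally {
            server.stop(0);
        }
    }
}
```

Because the stub is deterministic and lives alongside the tests, the same checks can run unattended in the CI build instead of waiting on a shared integration environment.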
Sprint 3: here are the actual numbers showing the productivity improvement
The percentage improvement this team saw across these particular functions is not dissimilar to what a lot of teams experience. I recommend establishing a baseline of performance data both to test productivity AND to build evidence for investing in automation in other areas. Having this data helped us with purchase orders for automated environment roll-up software, gave the team negotiating clout when they recommended investing in refactoring or weighing up release dates, and, above all, made the case to the business to invest further in training and in rolling out agile across all teams.
Of course, regardless of all that, the best thing I gained as a coach was the satisfaction that I'd educated someone and changed their professional satisfaction - beyond my expectation. The same doubting tester later said to me...
"I now understand what the Agile Manifesto means when it says Individuals and Interactions over Processes and Tools... it's not the process driving the people, it's the people driving the process"