In the past our developers and dedicated testers have worked fairly independently. I think this behaviour originates from a time when they were separate teams with their own managers. Once a story has been completed by a pair of developers, it is passed to QA for testing, where a mixture of scripted and exploratory testing is used. Introducing the Kanban board has made us all more aware of what each other is doing, and acutely aware of when a backlog of items is building up going into QA. Developers are keen to get their code tested as soon as possible in case any issues are found.
We decided to have a retrospective focused exclusively on the collaboration (or lack thereof) between developers and testers, in the hope that we could identify areas where we could make both of our lives simpler, reduce waste, and shorten lead time.
First question: Do we only consider our own function rather than the best way to get the story to DONE?
I asked the team to write down a score, ranging from 0 if you only consider your own function to 10 if you always consider what is best for the whole process. Answers ranged from 3 to 8, and we each gave a brief reason for our score.
Next question: What is the best way for QA to discover what needs testing?
We all agreed that the best approach would be to get the developers who built the story together with the tester who is going to QA it, before testing begins, to show what has been done and discuss the risks. This also provides an opportunity to talk through the unit tests we have written. We’ll also try to involve QA more in the UX design process.
Next question: How can we shorten the time from end of functional development to release?
I should have worded this one differently: suggesting we shorten the time put everyone on the defensive. “Make the best use of” would have been much better. Currently we have a period of about two weeks from the end of feature development to release, but with the progress we have made automating installs, the things we need to test during this period have completely changed. We’ve agreed to get together before the next release to plan this further.
Last question: What should happen if a change requires a large regression test?
If developers make a sweeping change at a level that affects the whole product, should we ask QA to do a complete regression? Ideally we would have a set of automated regression tests, but it’s not something we’ve managed yet. When this happened recently, very close to release, all the developers spent the day doing exploratory testing, and we all agreed it was really useful and a good learning experience for everyone.
So how did it go? Well, we identified lots of areas where we can improve our process by collaborating more. Only time will tell how well we adopt the new ideas. The pre-prepared questions seemed to work well and kept the dialogue focused. We usually do retrospectives in the office, but this one was remote, with the questions on shared slides; I don’t think too much was lost because of that. What do you think?