I’ve been fortunate enough over the past few months to be involved in a number of high-level Test Maturity reviews for several organisations, and I kept finding myself asking the question: “why do we test?”

A very simple answer might be: “We test to ensure that the application under test works as expected.” Yet all too often we fail to define exactly what “expected” looks like, and we never reach a common understanding of what we are producing and what we should be testing.

One reason I have seen many times is that we never come to a common understanding of what is expected because we “don’t have time”. We have all seen what happens next: the project is derailed by applications that, all too late, turn out not to meet expectations (which, of course, were never understood), with deadlines missed, budgets blown and customers disappointed.

If we were to pause for a moment and define these “expected” outcomes, what could the results look like?

  • A shared understanding of the expected results
  • Clarity on what is or isn’t a defect
  • Defects and changes caught earlier
  • Reduced rework and, ultimately,
  • Projects delivered on or close to their Go Live dates

I’m not suggesting a ‘plan everything up front before anything is built’ waterfall approach, but rather a collaborative approach between those members of the team who need to be involved. Something similar to the concept of the Three Amigos, where BAs, Developers and Testers work together to gain a shared understanding and agree on how they will know when something has been done correctly.
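
One concrete way to capture that shared understanding is to write the agreed behaviour down as an executable acceptance test before the feature is built. The sketch below is a hypothetical illustration in Python/pytest; the `apply_discount` function and its 10%-off-over-$100 rule are invented for the example, not taken from any real project.

```python
# Hypothetical sketch: an acceptance criterion agreed between the BA,
# Developer and Tester in a Three Amigos session, captured as a pytest
# test. The discount rule and function name are invented for illustration.
import pytest


def apply_discount(order_total: float) -> float:
    """Reference implementation of the agreed rule (illustrative only)."""
    if order_total > 100.00:
        return order_total * 0.90  # 10% discount above the $100 threshold
    return order_total


def test_orders_over_100_dollars_get_ten_percent_discount():
    # Given an order over the agreed threshold, When the discount is
    # applied, Then the customer pays 90% of the total.
    assert apply_discount(150.00) == pytest.approx(135.00)


def test_orders_at_or_below_100_dollars_are_not_discounted():
    # At or below the threshold no discount applies; the team agreed this
    # is correct behaviour, so a tester seeing it knows it is not a defect.
    assert apply_discount(100.00) == pytest.approx(100.00)
```

Because the tests encode the agreed behaviour, a failure is unambiguously a defect and a change to the rule is unambiguously a change request, which is exactly the clarity the list above asks for.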

I’ve worked on many projects where a solution is developed by a third party. The developer produces code based on their own understanding, the code is unit tested by the developer and then tested again by the vendor’s test team. It passes each stage with flying colours, only to be delivered to the client and fail dismally.

You ask yourself “Why?” and jump to the conclusion that ‘they’ didn’t test it properly. But more often than not, it comes down to a misunderstanding of what was actually required. Taking the time up front to communicate across the team (the BAs, Developers, Testers and the Business) would have given a much better chance of a common understanding, and with it a clearer view of what is to be delivered and when.

The cost associated with these misunderstandings is captured by Boehm’s Law: the later a defect is found, the more it costs to fix, with the cost typically rising by an order of magnitude for each phase the defect survives undetected.
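
To make that escalation concrete, here is a tiny illustrative calculation. The multipliers below are indicative only, assuming the order-of-magnitude escalation commonly associated with Boehm’s research; the exact figures vary widely between studies and project types.

```python
# Illustrative only: relative defect-fix cost multipliers in the spirit of
# Boehm's findings. The figures are assumptions for this example, not data.
PHASE_COST_MULTIPLIER = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "testing": 20,
    "production": 100,
}


def rework_cost(base_cost: float, phase_found: str) -> float:
    """Estimated cost of fixing a defect, given the phase it is found in."""
    return base_cost * PHASE_COST_MULTIPLIER[phase_found]


# A misunderstanding that costs $200 to resolve in a requirements workshop
# can cost $20,000 to put right once it has reached production.
print(rework_cost(200, "requirements"))  # 200
print(rework_cost(200, "production"))    # 20000
```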

There isn’t a “one size fits all” model, but a clearly defined set of principles may help us along our journey. So, next time you find yourself testing in a world where project members are clearly not communicating enough to reach a common understanding, and where there is not enough time to elaborate the requirements, consider promoting collaborative requirements gathering that involves the whole team, combined with a JBGE (Just Barely Good Enough) approach to documentation.

JBGE isn’t an excuse for doing a poor-quality job, because quality is in the eye of the beholder: the audience for an artefact, not its creator, determines whether it is sufficient. If your stakeholders require an excellent, detailed artefact, then it is only “just barely good enough” once it reaches that point of being excellent and detailed.


Given the promise of Agile, can we adopt similar agile-like practices in waterfall or hybrid projects? What do you think?

In the meantime, when you are faced with a project that hasn’t got the time (or offers other excuses) for requirements gathering, ask yourself, in the words of Dirty Harry, “Do I feel lucky?” (Well, do ya?). Because you’re going to need luck to get this one done on time.
