Why over-analysing your requirements is doomed to failure
I had an interesting discussion with one of the largest Government departments in the UK last week. Since the Brexit vote and the change of Prime Minister, their department has been combined with another.
This has thrown up many challenges. Like any organisation that has gone through some kind of merger, they now have multiple systems all doing the same thing, so some kind of consolidation needs to take place. But it's not just a systems issue; it's also a people and process issue.
Because the Brexit result was unexpected and the changes that took place immediately after were unplanned, the new department is facing at least 18 months of analysis, decision-making and change.
The problem is that the organisation can't stop performing for 18 months whilst this happens. Decisions still need to be made, spending still needs to be accounted for, performance still needs to be measured.
When we talked about Corporate Performance Management, their initial position was: "We need to step back and understand what all our stakeholders need from a single solution, how the new operation will be measured and what information they need to run their business. Only then can we consider how this could be delivered." They believed this would take at least six months to determine (a typical waterfall approach to systems thinking).
Interesting, but not the best way to approach this. Their mindset was shaped by their experience with their current data warehouse implementation. That massively complex programme required all of the data needs to be well defined in advance so that the appropriate data structures could be developed to support them. Unfortunately, as with the majority of data warehouse programmes and products, any change after the build, even the smallest, requires a whole heap of pain and effort to redesign.
So I asked them: if you were not restricted by the need to have everything scoped in advance, would you still approach the challenge in the same way? There were a few nods of recognition that a more flexible approach could work, but the majority were still more comfortable with the "age-old" way of developing systems.
So I then asked them how many times, in their experience, users had come back after receiving a new system to say "it wasn't quite what we needed", prompting another round of analysis. Lots of nodding heads.
This is the problem with asking users what they want. Most people expect the users of the current system to know what they need from a new one, when typically they don't. What they do know is what they already do. That is why, if you ask existing users what they want and build the new system to their specification, it will perform exactly the same way as the old one did.
The problem here is that users "do not know what they don't know".
This is why an agile approach to implementation is far more effective. Releasing system functionality "little and often" beats the "big bang" approach: a frequent release cycle lets you show quick wins to your stakeholders while, in parallel, your users evolve their requirements, ultimately giving them the system they really wanted in the first place but didn't know it at the time.
The takeaway from this story: the more flexible the system you implement, the more likely your users are to embrace it, because they can see that their input is having an effect and that any changes are available within six weeks, not six months.