Technology selection can be a fraught and intimidating process. Having experienced numerous selections as vendor, consultant, and customer, I have seen at first hand how clear decision making can be undermined by some common process mistakes.
Selections of any reasonable scale normally involve an RFP (request for proposal) process where vendors are invited to participate in a beauty parade based on your list of requirements. Their responses can be fed into a scorecard that should allow you to understand the trade-offs involved in any decision.
The intention is to provide objectivity and transparency, but it doesn’t always end up this way. Vendors are usually far more experienced at negotiating these RFP-style processes than most customers. It can be difficult to act as a genuinely informed consumer who asks the right questions. This can give rise to some common mistakes that can undermine the desired outcome.
Long lists of requirements
This is probably the most common mistake of all. When writing requirements, it’s very easy to get into the mind-set of not wanting to leave anything out. This can lead to a bloated list that does not allow you to meaningfully distinguish between different vendors and solutions.
This requirements bloat can act as a red herring in the selection process as you are not focussing on what you really need. It can lead to the selection of larger, more expensive platforms that attempt to meet every single requirement in the list, rather than a cheaper, more focussed alternative that meets your core requirements.
Prioritisation is key for any requirements list. No platform will meet every single requirement exactly. The trick is to determine which requirements are the most important and spend time seeking to understand exactly how they can be met by each platform.
Generic or “check box” requirements
A requirements list that is littered with generic, “check box” style questions won’t do much to help you distinguish between solutions. Most vendors have developed a stock response to this kind of requirements list and will be able to address them all.
For instance, asking if a vendor provides an API based integration is a bit like shooting fish in a barrel. The answer will always be a firm “yes”. What matters here is what sort of integration scenarios you want to support and what flavour of APIs you would prefer to work with.
Worse still, long spreadsheets can reduce the entire selection process to a repetitive grind that will only elicit a plain statement of compliance from vendors. You are likely to gain more insight by adopting a user story-based approach that seeks to understand how a platform meets a requirement. A more narrative style of requirement writing that’s rich with use case detail can help to provide valuable context and elicit a more meaningful response.
Obsessing over scores
If you are using a scorecard approach to rate solutions, you shouldn’t allow the idea of a “final score” to dominate the decision. Any technology selection will inevitably involve a trade-off between different features. This is not necessarily reflected by a scoring system, no matter how sophisticated.
Scoring systems tend to be highly subjective, and you can manipulate them to yield pretty much any outcome. Requirements tend to be of unequal granularity, which creates distortions in a scoring mechanism that are difficult to correct through weightings. For example, concerns over information security requirements may overshadow everything else.
Despite these shortcomings, a scoring mechanism can help to highlight areas in which a particular solution excels, but it should be augmented by a discussion of the trade-offs involved in any selection. This is not the same thing as awarding the prize to the highest score.
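The weighting problem described above is easy to demonstrate with a toy example. The sketch below uses entirely invented vendor names, requirement categories, and scores to show how a modest change in one weight can flip the “winner” of a scorecard, which is why the final number should prompt a discussion of trade-offs rather than settle the decision.

```python
# Hypothetical scorecard: two vendors rated 1-5 on four requirements.
# All names and numbers here are invented purely for illustration.
scores = {
    "Vendor A": {"security": 5, "usability": 2, "integration": 3, "cost": 2},
    "Vendor B": {"security": 3, "usability": 4, "integration": 4, "cost": 4},
}

def total(vendor, weights):
    """Weighted sum of a vendor's scores under a given weighting scheme."""
    return sum(weights[req] * score for req, score in scores[vendor].items())

# With equal weights, Vendor B comes out ahead (15 vs 12)...
equal = {"security": 1, "usability": 1, "integration": 1, "cost": 1}
# ...but inflating the security weight flips the result (27 vs 24).
security_heavy = {"security": 4, "usability": 1, "integration": 1, "cost": 1}

for label, weights in [("equal weights", equal), ("security-heavy", security_heavy)]:
    ranked = sorted(scores, key=lambda v: total(v, weights), reverse=True)
    print(f"{label}: winner = {ranked[0]}")
```

Neither outcome is “wrong” — the point is that the ranking is an artefact of the weights, which are themselves a subjective judgement.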
Assuming vendors will always want to respond
It takes a lot of time and effort to respond to an RFP and take part in the ensuing pitch process. Vendors will only respond to an RFP if they think there is a genuine chance of a sale. They will want to see a set of requirements that they can reasonably match. They will not be enthused by a set of generic, “check box” style questions that make it difficult to differentiate between vendors. They will also want to see a quality process that is coherent and reasonably transparent.
Allowing yourself to be consultancy led
Bringing a consultancy in to help with the selection can be a bit of a double-edged sword. They will bring knowledge and experience to bear that can be invaluable in understanding the marketplace. They will also tend to influence the process in a particular direction, consciously or otherwise.
A consultancy may also be invested in a clear outcome, particularly one that involves a decision to buy. They may even be angling after any resulting implementation work. Either way, they will be an extra interest group that can complicate decision making.
Succumbing to scope creep
Given that most enterprise vendors offer overlapping capabilities, it can be tempting to start expanding the scope of any selection exercise. This can quickly turn into a rabbit warren of complexity. It can also encourage the adoption of larger and more complex solutions that are less likely to be implemented successfully. A well-defined scope is easier to defend, so start by clearly identifying the problems that you’re trying to solve and the capabilities you want to focus on.
Over-focusing on technology
There can be a tendency to focus too heavily on the technology aspects of a solution at the expense of implementation concerns. It’s important to bear in mind that you are also choosing a software partner whom you could be working with for several years.
You need to consider who you might be working with, what their implementation approach is, how they manage projects, and what their post-implementation support looks like. You could choose the right technology, but you’ll still need a good implementation to achieve the right outcome.
Pre-selecting a platform
Some procurement exercises are just a process of rubber-stamping a pre-selected platform. Don’t do this – it is a monumental waste of everybody’s time. If you have selected a platform, then take the time to plan a good implementation rather than burning time on a pointless selection exercise…
You’re not buying a pitch
I’ve never been quite sure what pitch and demo meetings are for, other than as a means of torturing vendors. They aren’t even a good way to meet the people you’ll be working with. It’s easy to feel that you have fallen victim to a “bait and switch” routine as the hugely personable and knowledgeable pre-sales team gives way to a less agreeable implementation and support operation.
If you’re a fan of generic PowerPoint slides and well-rehearsed demonstrations, then pitch meetings will be a highlight of the selection process. It’s just that for the most part, you won’t learn much about what the platform is like to work with. To do that you’ll need some hands-on experience, or better still, the opportunity to talk to some people who have worked with each vendor. That will tell you more about the reality of what you’re buying than any pitch.
Forgetting that “none of the above” can be a valid outcome
You may find that no platform really addresses your use cases. Sometimes, the right result of a selection exercise is not to proceed with any vendor. You may find that use cases are better served by splitting up capabilities into a series of smaller, more targeted solutions. You may even decide that your requirements are too specialised to be addressed with a commodity solution.
This requires some courage, especially if you’ve invested a certain amount of time and energy into a formal selection process. It can be tempting to regard the lack of a clear preference as a failure. In fact, this should be regarded as a positive choice. It suggests that the process has been rigorous and honest enough to demonstrate that none of the solutions are appropriate.