Comparing Software using Peer Reviews

The goal of a business software peer review site is to help a buyer research and select an appropriate software solution (or at least it should be). There are many other uses for the review data once collected, and those uses may be part of how the site is funded, but the experience on the site should be optimized for the two key audiences: the reviewers and the buyers. The reviews themselves are the “gold”, but neither end of the process is simple, whether that’s acquiring the reviews or using them as an input to the buying process. Acquiring reviews is part science and part art, and is both important enough and complex enough to get a separate post at some point in the future. For this post I’ll focus on making the review data useful and actionable for the buyer.

The software buying decision process can be complex and varies by company. By the nature of the process it’s necessary to build a list of potential solutions and then compare them to a list of business and technical requirements and to each other. Everyone has their own mental model of what a comparison should include, but there are ways to facilitate the analysis by presenting the information in forms that can be easily consumed.

When you’re looking at business software, what are the questions you need answered and how can you get to those answers? The questions generally fall into three categories: fit, value, and vendor market presence. Some of the questions to ask:

Fit

  • Do you have a clear understanding of the business needs? (What problem(s) are you trying to solve and why?)
  • Do you have a clear understanding of the technical needs?
  • What features and functions are important for meeting needs?
  • Which products appear to be a good fit?
  • Does a specific product have the features and functions required to meet the business and technical needs?
  • How do the products compare to each other?

Value

  • How are the products licensed and priced?
  • What are the prices of the products of interest?
  • How do the prices compare by product and feature match?
  • How quickly can the products be implemented?
  • How much outside (consulting) assistance is needed to implement the product and get the best fit possible?
  • What is an average total cost of ownership (TCO) for the products of interest? (A simple TCO sketch follows these question lists.)
  • How satisfied / dissatisfied are the customers of the vendors of interest?
  • What product problems and issues are the users of each of the products reporting?

Vendor Market Presence

  • How financially stable is each of the vendors of interest?
  • How does the reputation of each vendor of interest compare?
  • What is the relative market share of the vendors of interest?
  • What is the relative market momentum of the vendors of interest?
  • How do revenue and margin growth compare between vendors of interest?
  • How do the employees rate each of the companies of interest?
  • What online presence does each vendor maintain, and in general how transparent is it about operations, pricing, support, etc.?
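
To make the TCO question above concrete, here is a minimal sketch of a multi-year TCO calculation. It is deliberately simplified and the cost components are assumptions; real models typically also fold in training, data migration, and internal staff time.

```python
def simple_tco(annual_license, implementation, annual_consulting,
               annual_support, years=3):
    """Naive multi-year TCO: one-time implementation cost plus recurring costs.

    A deliberately simplified sketch; real TCO models usually also
    include training, data migration, and internal staff time.
    """
    recurring = annual_license + annual_consulting + annual_support
    return implementation + recurring * years

# Example: $30k/yr license, $25k one-time implementation, $10k/yr
# consulting, and $5k/yr support over 3 years comes to $160,000.
print(simple_tco(30_000, 25_000, 10_000, 5_000))
```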

These are only some of the many questions that need to be answered, but they give you an idea of the kind and level of detail of information required. Getting value from peer review sites correlates directly with answering some of these important questions. I looked at the buying process in the post here, so I won’t rehash it now, but in that context most of the activity with the review data happens as part of “discovery” and “research”.

Peer review data offers the capability to develop unique insights and supports some unique ways to evaluate and rank products and vendors. One of its most valuable attributes, beyond the ability to connect to direct feedback about products and vendors from individuals with first-hand experience, is the real-time, self-updating nature of the process. Many other ranking and evaluation tools are based on a process that requires a long window to update and so are difficult to keep current, which matters particularly in “hot” categories that address high-value business issues where the technology is evolving rapidly.

One of the more important attributes of cloud applications is the short-cycle release schedule that cloud vendors have adopted, which means that new or changing business problems can be addressed with software updates in a matter of days or weeks. The potential value of understanding and evaluating these new capabilities is very high, and something many businesses today would treat as a high priority. If the evaluation tool doesn’t have the same short-cycle release and update capability, it can rapidly get out of sync with the realities of a changing category solution set. In other words, if your evaluation tool is only updated every 2+ years and the evaluation process takes 6+ months, it is unlikely to be current with what is actually important and relevant in the software being evaluated.

The review data collected by most peer review sites is very rich and is generally presented in ways that support different levels of consumption and evaluation. The “raw” review data itself provides detailed, unfiltered insights and is valuable on its own in the evaluation. Beyond that, the data can be enriched in many ways: it can be analyzed with standard or custom business intelligence tools, and it can be incorporated into reports that provide deeper analysis.

The data varies by review site, since it is collected through each site’s review questionnaire. The more data collected the more useful the reviews, up to a point anyway: a reviewer will only tolerate a questionnaire of reasonable length, so it is important to optimize it to collect as much data as possible while avoiding reviewer fatigue. The data should include (a sketch of such a review record follows the list):

  • A way to easily tell if the reviewer’s identity was verified and that the reviewer is (or is not) a current user of the software being reviewed.
  • General information about the reviewer and company (role, name if provided, company and industry, company size, etc.)
  • Date of the review and the dates the product was used, including where it was used if different from the reviewer’s current company.
  • Overall rating; this is often couched as the Net Promoter Score question, “how likely are you to recommend?”
  • A summary opinion of likes and dislikes with some more detailed backup (what do you like / dislike most?).
  • Business problem(s) that the software is solving.
  • What the software is specifically being used for.
  • Recommendations for others considering purchasing the software; this can be open ended and relate to selection, deployment, use, and any other “advice”.
  • The most / least important features for the reviewer.
  • Satisfaction with the product direction / strategy.
  • A satisfaction rating for several important product factors, including how well the product meets business and technical requirements, how easy or hard the product is to administer, ease of setup, ease of use, etc.
  • A rating for satisfaction with the vendor for ease of doing business, quality of support, product updates, etc.
  • A detailed evaluation of the most important features.
  • Implementation details, how long it took, was there outside assistance / consulting from the vendor or other provider, and deployment method.
  • Pricing information if available (this is generally hard to collect as many reviewers aren’t involved in the business details of the software purchase).
  • Some assessment of user adoption and return on investment (again this is difficult to collect since pricing may not be transparent).
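
To make the shape of that data concrete, here is a minimal sketch of what a single review record might capture, along with the standard Net Promoter Score calculation behind the “how likely are you to recommend” question. The field names are illustrative assumptions, not any particular site’s actual schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Review:
    """One review record; field names are illustrative, not a real schema."""
    product: str
    vendor: str
    review_date: date
    reviewer_verified: bool                  # identity confirmed by the site
    current_user: bool                       # still using the product?
    reviewer_role: Optional[str] = None
    company_industry: Optional[str] = None
    company_size: Optional[str] = None       # e.g. "51-200 employees"
    recommend_score: Optional[int] = None    # 0-10 "likely to recommend"
    likes: str = ""
    dislikes: str = ""
    business_problems: list[str] = field(default_factory=list)
    satisfaction_ratings: dict[str, float] = field(default_factory=dict)
    implementation_months: Optional[float] = None
    used_outside_consulting: Optional[bool] = None
    pricing_notes: Optional[str] = None      # often missing, as noted above

def nps(scores: list[int]) -> float:
    """Standard Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)
```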

Many review sites offer a detailed evaluation report that presents the data in a way that facilitates comparison of the products and vendors. As a demonstration of a comparison report, and of the potential data enrichment process, I’ll use one of the standard G2 Crowd reports, referred to as a Grid report, as an example. I won’t reproduce the entire report, but will link to an example of the comparison graphic and then discuss the information and analysis by report heading.

Satisfaction is fairly straightforward: it’s the total of all the “satisfaction” questions, normalized across all vendors. It also includes some more detailed satisfaction questions that relate to usability, specific functions, and implementation, as well as some weighting for the number of reviews and how recently each review was submitted or updated.
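
As a rough illustration of that kind of scoring, here is a sketch of a recency-weighted satisfaction score over the review records sketched earlier, followed by a min-max normalization across vendors. The half-life decay and the equal weighting of questions are my assumptions, not G2 Crowd’s published methodology.

```python
from datetime import date

def satisfaction_score(reviews, today=None, half_life_days=365):
    """Recency-weighted mean of each review's satisfaction ratings.

    Assumptions for this sketch: every satisfaction question counts
    equally, and a review's weight halves every `half_life_days`.
    """
    today = today or date.today()
    weighted_sum = weight_total = 0.0
    for r in reviews:
        if not r.satisfaction_ratings:
            continue
        # Average this reviewer's ratings across all satisfaction questions.
        mean_rating = sum(r.satisfaction_ratings.values()) / len(r.satisfaction_ratings)
        # Older reviews count less.
        age_days = (today - r.review_date).days
        weight = 0.5 ** (age_days / half_life_days)
        weighted_sum += mean_rating * weight
        weight_total += weight
    return weighted_sum / weight_total if weight_total else 0.0

def normalize(scores: dict[str, float]) -> dict[str, float]:
    """Min-max scale per-vendor scores onto a common 0-100 axis."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {vendor: 100 * (s - lo) / span for vendor, s in scores.items()}
```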

Market presence in our Grid methodology is a somewhat more sophisticated, algorithm-derived score. I won’t share the actual weightings, of course; that’s part of our “secret sauce”. But the general input types are posted on the site to add to the transparency of the process. Inputs include things like growth, financial stability of the company, revenue, market share, employee satisfaction, social presence, and several other bits of available information.
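
In the same spirit, a composite market presence score can be sketched as a weighted sum of normalized inputs. The inputs mirror the ones named above, but the weights below are placeholders invented for the sketch, since the real ones aren’t published.

```python
# Placeholder weights: the real weightings are proprietary; these exist
# only to show the shape of the computation, and they sum to 1.0.
PRESENCE_WEIGHTS = {
    "revenue_growth": 0.25,
    "market_share": 0.25,
    "employee_satisfaction": 0.20,
    "social_presence": 0.15,
    "financial_stability": 0.15,
}

def market_presence(inputs: dict[str, float]) -> float:
    """Weighted sum of inputs already normalized to a 0-100 scale."""
    return sum(PRESENCE_WEIGHTS[k] * inputs.get(k, 0.0) for k in PRESENCE_WEIGHTS)

# Example: a vendor strong on growth but with modest market share.
print(market_presence({
    "revenue_growth": 90, "market_share": 40,
    "employee_satisfaction": 75, "social_presence": 60,
    "financial_stability": 80,
}))  # -> 68.5
```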

That gives you some idea of how a peer review site can help buyers get to a decision. How much influence the information has with buyers depends on many factors: company policies, personal style, individual trust in different information sources, company size, etc. can all play into the decision to use the review information, and how much to rely on it.
