Debriefing the Debrief

How an effective RFP debrief process leads to more engaged vendors and better bids
Tom Lovering

Director of Client Engagement

February 8, 2022

As an information technology company that works routinely with government customers, we spend a lot of time and energy on Requests for Proposals (RFPs). It is a constant, day-after-day, year-after-year labor of love. This work includes:

  • Surveying the anticipated business opportunities
  • Analyzing win probabilities
  • Choosing which projects to pursue
  • Putting together bid proposals

Undeniably, the most time-consuming part of the exercise is putting together proposals. Whether we’re dealing with an integrated eligibility project, a child support application, or a department/agency website, the preparation of a custom response to any given RFP can consume days, often weeks. It’s just part of doing business. We recently shared some tips on the process in Maine Biz: “How to respond to RFPs and build business in the public sector.”

In the best-case scenario, we receive the award and get to do the work. We welcome the opportunity to build and solidify relationships, and to exercise our software development skills in interesting ways. That’s exactly why we’re here. When we lose, we still try to make the best of the situation and learn lessons so we can do better next time. This is where debriefs come into the picture.

Why the RFP debrief process matters

Ideally, the debrief feedback tells you what you did right, what you did wrong, how you compared with other bidders, and where you might have gone off the rails. It’s the information that enables you to improve for the future. After any bidder takes the time to carefully prepare a compliant and meaningful bid response, the debrief is a reasonable and modest expectation in return.

Debriefs tend to be in the best interests of the customer agencies too. They help assure that award decisions have been carefully thought through and supported by a clear rationale. They also provide an opportunity for candid discussion and relationship building with the broader industry, and can help the industry as a whole evolve in desirable directions over time. This ultimately translates into better, more competitive offers when future projects and opportunities arise.

And yet, even though debriefs are a source of tremendous value, they are hobbled in many ways. Too often they are treated as an administrative burden, and bidders, despite the time they have freely invested, rarely receive the information they actually want.

We do understand the reasons why. The ever-looming threat of a protest is usually enough, in itself, to squelch productive discussion, and the protest process can be tremendously inconvenient and arduous. While every agency strives for transparency, the more information is shared, the more it is subject to interpretation, and the more easily it can be construed as evidence of a tilted procurement decision. Moreover, confidentiality issues can arise in disclosing certain information about other offers.

Defining the ideal debrief

We believe a reasonable and rational debrief process can allow a fair amount of insight and learning without putting a procurement process at risk. The Federal Acquisition Regulation (FAR), for instance, provides a decent framework, specifying a well-defined scope of information for debriefs. At the same time, there are some great practices at the state level that the federal government has not yet adopted.

Because routine debriefs still vary widely in scope and quality, we wanted to share, from the bidder’s perspective, the precise information we hope to receive after working arduously on a public bid.

Here are the top nine items we like to see in a debrief:

1. Offers received

Assuming it is a public procurement subject to FOIA rules, we’ve never understood why this piece of information is sometimes withheld. We ideally want to know the names of all offerors; a simple list is fine. If that can’t be provided, at least tell us how many bids were received.

2. Scoring potential and relative factor weighting

Often this information is provided in the RFP, but not always. We want insight into how many total points were available, and how many points were assigned to each individual subfactor. For instance, we want to hear something like, “1000 points: 20% experience, 30% technical solution, 30% management approach, 20% cost.” If adjectival ratings were used instead of points, we want a clear sense of the adjectival scheme used for each factor.
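To make the arithmetic concrete, here is a minimal sketch (in Python) of how factor weights translate into available points; the total and weights are hypothetical, simply mirroring the example above:

```python
# A minimal sketch of how factor weights translate into available points.
# The total and weights below are hypothetical, mirroring the example above.

TOTAL_POINTS = 1000
FACTOR_WEIGHTS = {
    "experience": 0.20,
    "technical solution": 0.30,
    "management approach": 0.30,
    "cost": 0.20,
}

for factor, weight in FACTOR_WEIGHTS.items():
    print(f"{factor}: {weight * TOTAL_POINTS:.0f} points available")
```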

3. Initial technical volume scoring

We want to know exactly how many technical points were available (max points). We also want a sense of the scored range of the initially evaluated offers prior to any presentations/discussions (i.e., the highest and lowest totals; a sense of the standard deviation would be helpful too, but we recognize this might be a lot to ask). We also, of course, want to know our own raw technical score prior to any presentations/discussions, and it would be equally important to know the raw score of the winning offeror at that stage. If adjectival ratings were used instead, we want to know the adjectives applied for all factors, for both our own bid and the winning bid. Many states fully disclose this information in a simple table showing the scores of all offers received. To the extent that ranking was possible, we also want to know our initial technical ranking relative to other bidders (that is, the ranking we received prior to any presentations/discussions).
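For illustration, a minimal sketch of the statistics we describe, using entirely hypothetical scores not drawn from any real procurement:

```python
import statistics

# Hypothetical raw technical scores for all offers, prior to any
# presentations/discussions (not drawn from any real procurement).
initial_scores = [612, 688, 701, 745, 790]
our_score = 701  # our own hypothetical raw score
# The winning offeror's raw score would simply be another entry here.

print(f"High: {max(initial_scores)}, Low: {min(initial_scores)}")
print(f"Std deviation: {statistics.stdev(initial_scores):.1f}")

# Initial technical ranking (1 = highest scored) relative to other bidders.
rank = sorted(initial_scores, reverse=True).index(our_score) + 1
print(f"Our initial ranking: {rank} of {len(initial_scores)}")
```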

4. Post-presentation/discussion technical scoring

Above and beyond the initial technical scoring, we want to know how scoring was applied as a result of any follow-on presentations/discussions that occurred. Some key points here:

  • If additional presentation points are available, we want to know exactly how many additional points could be received (max allowable allocation). Within this framework, we also want to know the highest and lowest point allocations that were received within the shortlisted pool of vendors. For instance: “100 additional points, 10 points for worst presentation, 70 points for best.”
  • If presentations/discussions merely resulted in changes to initial scores, we would hope to learn the range of possible adjustment. We also want to know exactly how much our own initial score was raised/lowered by the presentation.
  • If presentations were assigned adjectival ratings, we want to know the adjectival scheme that was applied. We also want to know our own adjectival rating, as well as the winning adjectival rating.
  • If multiple presentations were received, and some manner of scoring was assigned, we want a sense of how our presentation ranked relative to other bidders (e.g., 1, 2, 3), as in the sketch after this list.
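As a rough illustration of the presentation scoring described above, here is a minimal sketch; the vendor names and point values are hypothetical:

```python
# Hypothetical presentation scoring for a shortlisted pool of vendors.
MAX_PRESENTATION_POINTS = 100  # max allowable allocation

presentation_points = {
    "Vendor A": 70,   # best presentation
    "Us": 55,
    "Vendor B": 45,
    "Vendor C": 10,   # worst presentation
}

print(f"Best: {max(presentation_points.values())} "
      f"of {MAX_PRESENTATION_POINTS} available")
print(f"Worst: {min(presentation_points.values())}")

# Our presentation rank relative to other bidders (e.g., 1, 2, 3).
ordered = sorted(presentation_points, key=presentation_points.get,
                 reverse=True)
print(f"Our rank: {ordered.index('Us') + 1} of {len(ordered)}")
```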

5. Strengths and weaknesses of technical offer

Above and beyond mere scoring, some of the most meaningful feedback we receive during debriefs is a summary explanation of the perceived strengths and weaknesses of our offer. When we can get it, this information is incredibly helpful, as it enables us to focus precisely on the elements that matter, instead of wondering why we received a particular score. There are a few ways to get this done. One possibility is to show us our own evaluation sheets, or at least our own subfactor scores; often this is enough to identify the areas where our offer was more or less appealing. Alternatively, a verbal or written explanation is equally welcome.

This is an area where many customers could improve their debriefs. Even customers that do offer insight in this regard tend to do so inconsistently. If a strength is identified for one offeror, it should, in theory, register as a relative weakness for another; if we missed something that another offeror proposed, that is quite important for us to know. We recognize that confidentiality issues might be involved, but we should be given some reasonable sense of where we failed to optimize our approach. When dealing with a government project, it is, after all, ultimately public work being done, so transparency in this area benefits us all.

6. Cost comparison

Cost is obviously one of the most important parts of any bid. At minimum, we want to know the highest and lowest offers received, as well as the amount of the winning bid. We also want to know our cost rank relative to other bidders. If pricing was adjusted during the bid process, we want this information for each step (before and after presentations, before and after best and final offers (BAFO), etc.). Again, many states provide this level of detail in a simple tabulated format disclosing results for all offers received. To the extent a fixed formula or approach was used to score pricing, we also want a clear understanding of the formula or process used, as in the sketch below.
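As one example, here is a minimal sketch of a common price-scoring formula in which the lowest offer earns full cost points and other offers scale proportionally; the formula and all dollar figures are our own hypothetical assumptions, not a universal rule:

```python
# One common (but by no means universal) price-scoring formula: the lowest
# offer earns full cost points, and other offers scale proportionally.
# All dollar figures below are hypothetical.

MAX_COST_POINTS = 200

offers = {
    "Vendor A": 1_450_000,
    "Us": 1_300_000,
    "Vendor B": 1_175_000,  # lowest offer earns the full 200 points
}

lowest = min(offers.values())
for vendor, price in offers.items():
    score = MAX_COST_POINTS * lowest / price
    print(f"{vendor}: ${price:,} -> {score:.1f} cost points")
```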

7. Competitive range and shortlist decisions

Organizations can use a variety of different factors to shortlist vendors. It could be based on budget cutoffs, or ranking, or whether certain approaches or technologies were used. We appreciate receiving a sense of the applied rationale, and a clear understanding of why we were included/excluded. If there were specific cutoffs or thresholds, we want to know what they were. We also want to know exactly how many vendors, in total, were shortlisted or deemed to be in the competitive range.

8. Composite scoring

In many bid processes, a final composite score is calculated for all offerors. We want to know the scored range (highest and lowest assigned scores), as well as the score of the winning bidder. We also want to know our final composite ranking. Again, many states are able to provide this level of detail in a simple tabulated format disclosing results for all offers received.
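To show what such a tabulation might look like, here is a minimal sketch with hypothetical vendors and composite scores:

```python
# Hypothetical final composite scores for all offerors, of the kind many
# states publish in a simple table.
composites = {
    "Vendor A": 742.5,
    "Us": 810.0,
    "Vendor B": 861.3,
}

# Rank all offerors by composite score, highest first.
ordered = sorted(composites.items(), key=lambda kv: kv[1], reverse=True)

print(f"High: {ordered[0][1]}, Low: {ordered[-1][1]}")
print(f"Winning bidder: {ordered[0][0]} with {ordered[0][1]} points")
for rank, (vendor, score) in enumerate(ordered, start=1):
    print(f"{rank}. {vendor}: {score}")
```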

9. Other administrative feedback

Above and beyond the scoring itself, it is helpful to receive feedback on the bid package as an artifact. Maybe a specific piece of information was especially hard to find in the submission, a particular graphic was exceptionally good or exceptionally confusing, or something about the organization of the offer was not quite right. This type of feedback from evaluators helps us deliver a better format in the future.

Final thoughts on improving bid outcomes

While there are many reasons to be sensitive and protective with public procurement information, best practices across government institutions suggest there is a way forward in which bidders are afforded reasonable insight into award decisions without risking protests or other litigation. Many states already provide the scope of debrief information referenced above, just not always in a systematic way that gives clear insight into each step of the evaluation process.

Beyond being a matter of goodwill and a reasonable return on invested effort and energy, effective debriefs are vital if the government wants software developers to provide good, meaningful offers in the future. Only with the right information can we innovate responsibly and optimize our bids to support public project and program operations.

As a concluding bonus thought, I would dwell for a moment on the concept of “quality offers.” Obviously, this is something every procurement wants to achieve. In the long run, debriefs are very important in this regard, but are only half the battle. The other half is what happens up front, with the release and administration of the RFP. To assure that good offers are received, it is equally important for government clients to take other sensible steps, like writing clear project specifications, and even publishing reasonable budget ranges (this last item being unpopular, but extremely effective in assuring realistic offers that are actually competitive).

We could go on, but perhaps this concept of “quality offers” is better left for another blog post. The point here is that we value the feedback we receive from customers, and we want our customers to know which pieces of information are most helpful.

Please pass this along to anyone who can make a difference!
