One of the most awkward interactions that I have on sales calls these days arises out of an apparently benign question that I ask.
“How many reviewers will you have for each blog post?”
Here’s what happens next, depending on the response.
- When the prospects answer with one or zero reviewers, the call continues without a hitch.
- If the answer is two, I call that out as a risk and also say that we’ll probably only engage with our most expensive model.
- When it’s three or more reviewers, I pass on the business.
You might wonder why I do this. After all, the optics are seemingly terrible for us.
It appears as though I’m trying to hide our authors’ work from too much scrutiny. But, I assure you, that isn’t the case. With more than 2,000 published posts and page views numbering in the tens of millions, I’m not too worried about the 10,000,001st or the 10,000,002nd reader.
Rather, I don’t want Hit Subscribe to participate in an unsuccessful content program. And with that many reviewers, you will almost certainly have an unsuccessful content program.
Since that’s a bold claim, I’m going to dive into it in detail. I’m going to back it with a lot of data, and then I’m going to explain, based on my experience, why I think this happens. And finally, I’ll offer less damaging risk mitigation strategies than choking the life out of your content program with multi-party reviews.
Since there's a lot of content here, I'll provide an outline, in case you'd like to skip directly to the "why" or the "what to do instead":
- The Data About Content Reviewers
- Why Is Multi-Party Review So Inefficient?
- Quantifying the Damage of Reviewer-Induced Inefficiency
- Risk Reduction Without the Carnage: Some Tactics To Try
- Kill the Bureaucratic Review
The Data About Content Reviewers
To start off with, let’s get to the actual data as quickly as possible. But to meaningfully do that, I need to speak a bit to methodology here.
Methodology
Hit Subscribe is in the content business. Historically we’ve sold blog posts, but increasingly, we sell and consult on content programs holistically. And because of that, we have tons and tons of data over the years from prospects, clients, and industry contacts.
As part of our discovery process, we pretty much always discuss the desired cadence, so it’s one of the richest and most reliable data points that we have. So for this case study, I examined everyone for whom we had a stated, desired content cadence. I normalized that in units of “posts per month.”
From there, I included only contacts for whom we also knew enough about their review process to understand the number of reviewers. With those two pieces of data in hand, all that remained was measuring actual cadence, which was a fairly simple matter of counting what each company actually published.
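If you like seeing the arithmetic spelled out, here's a minimal sketch of that goal-completion calculation. The rows and field names are purely illustrative placeholders, not our actual dataset or tooling:

```python
# Minimal sketch of the goal-completion calculation described above.
# The rows are illustrative placeholders, not real client data.
from statistics import mean

contacts = [
    # reviewers = gatekeeper count; desired/actual = posts per month
    {"reviewers": 1, "desired": 4, "actual": 4},
    {"reviewers": 2, "desired": 6, "actual": 3},
    {"reviewers": 3, "desired": 6, "actual": 0.5},
]

def completion_by_reviewer_count(rows):
    """Average goal completion (actual / desired), grouped by number of reviewers."""
    groups = {}
    for row in rows:
        groups.setdefault(row["reviewers"], []).append(row["actual"] / row["desired"])
    return {k: round(mean(v) * 100, 1) for k, v in sorted(groups.items())}

print(completion_by_reviewer_count(contacts))  # {1: 100.0, 2: 50.0, 3: 8.3}
```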
So, some quick notes on methodology:
- This is obviously anonymized, so I won’t go into demographic specifics about any companies.
- The data includes everyone for whom we knew both desired cadence and review process.
- It includes a mix of clients for whom Hit Subscribe fulfilled posts, pure strategy engagements where they fulfilled elsewhere, and prospects/industry contacts with whom we never worked.
- I discounted posts orthogonal to the stated purpose of the desired cadence. For instance, a post announcing the hiring of a new CFO wouldn’t count when the stated goal was organic traffic.
- Reviewers here are people empowered as gatekeepers, so it isn’t a “two reviewer” situation if a single gatekeeper simply shows the post to someone else.
The Results
Let's now take a look at the results. I plotted three different graphs, showing, respectively:
- The number of pieces of content desired per month as a function of the number of reviewers.
- The number of pieces of content actually created per month as a function of the number of reviewers.
- And finally, the goal completion percentage as a function of the number of reviewers.
Here’s what that looked like, in detail.
1. Desired Posts Per Month
The main point of this graph was to establish a baseline. If we're going to talk about efficiency, we of course need to normalize how much content each company actually produced against how much content they wanted to produce.
But one curiosity here is that the single reviewer average was the lowest. I would have expected the desired cadence to decrease a bit with each additional reviewer, if for no other reason than the expectation that each additional reviewer ostensibly creates additional “quality” on the “quantity versus quality” spectrum.
I don’t want to get too far into the business of pure speculation, but I think something that partially explains this is operational size. Simply by default, larger businesses will tend to want more content, and larger businesses also tend to throw more humans at problems.
But let’s put a pin in the organizational theory for now. I’ll channel my management consulting background into some hypotheses a little later in the post.
2. Actual Posts Per Month
Now let’s take a look at how much content companies actually produce, as a function of the number of gatekeeper reviewers.
I’m going to list the actual numbers here, just to accentuate the trend line:
- With no reviewers, companies posted an average of 8.0 posts per month.
- Add a single reviewer, and that production shrinks to 4.5 posts per month.
- With two reviewers, we’re down to 2.9 posts per month.
- And with three reviewers, the number shrinks to a vanishingly small 0.4 posts per month.
Setting aside desires for the moment, this has a geometric kind of curve to it. Very loosely speaking, it feels as though each reviewer you add to your process cuts your throughput in half.
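If you want to see just how loose "loosely" is, here's a quick back-of-envelope comparison of those observed averages against a naive "halve it per reviewer" model. The model is mine, purely for illustration; notice that the real drop at three reviewers is even steeper than halving:

```python
# Observed averages from the data above vs. a naive halving model.
observed = {0: 8.0, 1: 4.5, 2: 2.9, 3: 0.4}  # posts per month by reviewer count

for reviewers, actual in observed.items():
    predicted = 8.0 * (0.5 ** reviewers)  # start at 8 posts/month, halve per reviewer
    print(f"{reviewers} reviewer(s): observed {actual:.1f}, halving model {predicted:.1f}")
```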
3. Efficiency as a Function of Stated Goals
At this point, we’re really into the thick of it. How did companies do against their own stated goals?
Well, here’s a visual:
Recall that with the last graph, adding that first reviewer cut content production in half. But also recall that companies with a single gatekeeper tended to have significantly lower expectations from a volume perspective. So despite having half the throughput, the single gatekeeper situation was still quite efficient.
It’s a different story, however, with two and three reviewers. Those businesses are a content efficiency bloodbath. Here are the actual numbers:
- 100% efficiency with zero reviewers (I believe it was actually higher, but I capped it at 100%).
- 98.7% efficiency with one reviewer.
- 46.5% efficiency with two reviewers.
- 7.0% efficiency with three or more reviewers.
Do you see now why I have my policy of "zero or one reviewer is fine, two is a risk, and three is a no-go"?
It’s because I don’t want to take your money to help you fail.
Why Is Multi-Party Review So Inefficient?
Now is the time for the hypotheses that I mentioned earlier.
I use the word “hypotheses” very deliberately here, given its relationship to the scientific method. As I mentioned previously, I spent years working as an independent management consultant, and I always viewed the essence of management consulting as applying the scientific method to business. What follows are hypotheses based on observations of the world, but not actually confirmed with experimental data (yet).
So while I feel fairly confident in these hypotheses, understand that your mileage may vary here. We’re not talking about Newtonian mechanics.
Still, there are a few things that I think neatly explain the multi-reviewer slowdown, in my experience.
1. Simple Logistics and Throughput Bottlenecking
The first consideration is mundane, but non-trivial. Having multiple reviewers creates simple logistical slowdown, and more than you’d think at first blush.
With zero reviewers, there’s obviously zero slowdown, but there isn’t much more with a single gatekeeper. The gatekeeper receives the content, takes a read, and hits publish.
Add a second reviewer, however, and inboxes enter the fray. This, of course, creates lag time not only in reviewing the content but also in timing and aggregating the feedback. Multi-party reviews rarely occur in parallel because the reviewers quickly learn the headaches of redundant or conflicting feedback.
And so the process elongates in calendar time.
But another important consideration here is who is doing the review. The primary gatekeeper is typically someone occupying the role of a content marketing manager. This role is dedicated to and responsible for content.
The other approvers, however, tend to sit in other business functions, sometimes wholly outside of marketing. And these people (sales staff, engineers, execs) tend not to view the blog as a particularly high priority. So the process elongates further.
I’ve seen more than two months elapse between content submission and approval, even with absolutely no suggested changes or comments.
2. The Book Club Effect
Alright, now let’s roll up our sleeves and get into organizational realpolitik a little, shall we?
After all, any situation with more than two people creates politics. And the second reviewer serves as that third person.
Imagine going to a slapstick comedy movie with a friend, and feeling surprised by your enjoyment. You know it’s stupid, but Adam Sandler farting and taking footballs to the groin actually makes you laugh.
Now imagine you and your friend leave the theater and a third friend asks you both what you thought of the movie. Do you blurt out, "High cinema! Oscar-worthy groin gags!" Or do you hesitate a bit, worried about looking like a rube?
Reviewing something on your own is a subtly different ballgame than reviewing it as part of a panel.
Perhaps nothing embodies this more than a book club, where the entire purpose of the activity is a critical review of the book. In a book club, whatever you collectively decide, the verdict will never simply be five or six people all saying, “yep, that’s a book,” and going home. The purpose of the book club is to really get into the weeds of the book.
Likewise, the purpose of a panel review in a corporate context is to really roll up your sleeves and get into the thing. This is true of code reviews, RFPs, budgets, etc. And now, in your organization, it’s true of each and every single blog post.
When you enlist reviewers to provide feedback, they WILL provide feedback, whether it helps or not. And whether the feedback helps or not, it will definitely do one thing—slow you way down.
3. Enlisting Your Critics as Reviewers
The book club effect will slow you down no matter who you invite to join the club. But, to make matters worse, it’s pretty common to invite the most critical people you can possibly find.
Why is that? Well, there’s actually an entirely benign and understandable explanation. Let’s look at this in the world that Hit Subscribe occupies—the world of content for software engineers.
Say an early stage developer tools company hires a content marketing manager to generate content—probably someone with a freelance writing or journalism background. Predictably, they try the following for content:
- Write it themselves, but realize they “aren’t technical enough” to resonate with the techies.
- Try to get the company’s engineers to write it, but the CTO firmly puts the kibosh on that.
- Go find freelance engineers on Upwork and ask them to write blog posts.
- Suffer through a lot of ghosting, direct outreach, and other bad noise before finally finding one who agrees to write an article and then actually does. A good one, with actual Java code in it.
They hit publish on the blog post, but their time basking in the glow of success proves short-lived. An irate internal engineer comes to them asking how they could publish such utter crap on the blog.
Who builds microservices in Java! That’s a creaky old language for creaky old farts, and this post should have been in Python or Rails or at the very least Scala, if they have to be on the JVM.
The content marketing manager has no idea what to make of this feedback. But the course of action is clear:
Hey, Surly Steve. My bad. Super sorry ’bout that, and I’ll take it down. Would you be willing to review every submission from now on?
This is a perfectly rational thing to do. But now your content marketing group has been optimized to do two things: placate Surly Steve and prevent the content marketing manager from being yelled at. Notice that neither of those includes the words "marketing qualified leads."
And calling “making Steve happy” and “internal yelling per post” vanity metrics would be generous.
4. Creating Serious Contributor Attrition
So now your content operation has empaneled an array of people for quality control. Your freelance engineers submit blog posts and this happens:
- The content marketing manager does a proofreading and copy-editing pass with a red pen.
- The product marketing manager ensures that everything aligns with brand messaging and product positioning.
- A software engineer performs a “code review” of sorts on the post.
That all seems fair enough.
But let’s look at it from someone else’s perspective. Let’s look at it from the perspective of the freelancer who, by the way, has six-figure earning potential, is gainfully employed, and is doing this in their spare time, instead of playing games with their kids.
Your quality control is a recreation of the worst moments of college and their early career:
- An English teacher nitpicking their diction.
- A marketer telling them their content needs to include more shilling and schmaltz.
- Someone they probably think doesn’t know what they’re talking about conducting a code review.
If you were lucky enough that this freelancer didn’t ghost you during the recruitment process, rest assured they will after one or two of these review processes. I’ve seen this in countless situations not involving Hit Subscribe’s fulfillment. And I’ve seen it when it does involve our fulfillment.
I have to go back to clients with review processes like this and say, "We don't have anyone who's willing to write for you anymore."
Quantifying the Damage of Reviewer-Induced Inefficiency
So reviews take longer, reject more things, and burn through more contractors. So what?
After all, isn't quality over quantity the order of the day? Wouldn't you rather produce one excellent post per month than four mediocre ones?
Well, here’s the rub. As I pointed out in a post on my consulting blog, quality, unless otherwise defined, tends to be both subjective and intangible. You can’t quantify the purported “brand damage,” or whatever, that results from Surly Steve not liking the post, or from the product manager thinking it doesn’t have “brand harmony” or something.
But you can quantify the damage to your traffic and leads. And here’s what that looks like, going from eight posts per month, down to three, and then half a post:
(This projection model assumes a sophisticated organic traffic plan and an initial domain authority of 29—if you’d like to learn more about the methodology, stay tuned for an upcoming post.)
Generally, when it comes to subjective concerns about quality, the fear is that some percentage of readers will see the article, scoff, and resolve not to buy from you. Perhaps if the article really upsets them, they’ll even roast you on social media.
But do you really think whatever Surly Steve and Product Patty catch would have cost you 4,196,121 visitors and (at a 1% rate) 41,961 MQLs over three years? Because that’s what Steve and Patty themselves are costing you.
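If you want the back-of-envelope math behind that claim, it's nothing fancier than multiplying the projected three-year visitor gap by the assumed 1% visitor-to-MQL conversion rate:

```python
# Back-of-envelope lead math using the figures cited above.
lost_visitors = 4_196_121  # projected three-year visitor gap between the fast and slow programs
mql_rate = 0.01            # assumed visitor-to-MQL conversion rate (1%)

lost_mqls = int(lost_visitors * mql_rate)
print(f"Lost MQLs over three years: {lost_mqls:,}")  # Lost MQLs over three years: 41,961
```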
Risk Reduction Without the Carnage: Some Tactics To Try
You’re now in a bind. You understandably don’t want the content to reflect poorly on you or to risk looking like some kind of content farm. But good luck unseeing that your review process is literally devaluing your business by costing you tens of thousands of leads.
What do you do next?
What this comes down to is really a question of changing how you think about risk. I can relate to this as a grizzled veteran of the software industry.
Twenty years ago, quality control in the software development world meant slowing things down. Ship software once every year or two, and spend the time between releases throwing an army of low-wage human beings with binders full of test scripts at the software, trying to account for and plan every imaginable detail. Quality control meant a quixotic attempt to eliminate risk through slowness and neurotic contingencies.
But then along came agile (and later DevOps), who said, “Hey, no matter how slowly you go, you can’t control everything and, even if you could, your competition is lapping you while you fret.” The rest is history. Now software companies ship prototypes, test in production, deal with risk on the fly, and adapt as they go.
So our mission here is to stop with the waterfall approach to content and get agile. How can you control risk, knowing that you need to go fast and that a bureaucratic review process will slaughter your results?
1. Just Relax and Have One Gatekeeper
The first thing I’d suggest is that you consider just relaxing.
Humans have an interesting cognitive bias known as the “spotlight effect.”
The spotlight effect is a term used by social psychologists to refer to the tendency we have to overestimate how much other people notice about us. In other words, we tend to think there is a spotlight on us at all times, highlighting all of our mistakes or flaws, for all the world to see.
Time for a little tough love when it comes to your corporate blog. There’s a pretty good chance that nobody is (yet) reading anything you publish.
You’re going to commission content from a freelancer, put it through all kinds of reviews, hit publish, and bask in the crickets. You’ll then promote it on social media and continue to bask in the crickets. In fact, if someone came along and told you the content sucked, that’d be great because it would probably mean someone was actually reading!
As a quick and dirty rule of thumb, assume you'll get one social media engagement for each hundred readers, and one comment for each one thousand. That's an awful lot of people who would have to read your content before anyone would care enough to criticize you.
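To put that rule of thumb in concrete terms, here's a trivial sketch; the monthly traffic figure is an illustrative assumption, not a benchmark:

```python
# The engagement rule of thumb above, inverted: how long before anyone reacts?
READERS_PER_ENGAGEMENT = 100   # ~1 social media engagement per 100 readers
READERS_PER_COMMENT = 1_000    # ~1 comment per 1,000 readers

monthly_readers = 500  # illustrative figure for a young corporate blog

print(f"Expected social engagements per month: {monthly_readers / READERS_PER_ENGAGEMENT:.0f}")
print(f"Months until the first comment: {READERS_PER_COMMENT / monthly_readers:.0f}")
```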
And here’s another thing to consider. If your content gets enough views (tens of thousands), someone will criticize you no matter what you say or how diligent you are. They’ll say that your post is bad, that your company shouldn’t exist, and that you’re a bad person. That’s just kind of the internet.
So suggestion one is just to publish and understand that content is a messy game. As Danielle Morrill of Twilio said:
If you wait so long that you’re not embarrassed of anything, you’ve waited way too long. You have to lower the bar, especially for that first post.
2. Separate Your Blog from a Knowledge Base of Articles
One of the most common things I see is companies wanting to treat their blogs as periodical publications. I understand this impulse since "blog" originally came from "web log"—an internet diary of someone's life.
But the modern corporate blog is a long, long way from early internet wranglers hand-writing HTML over a 56K dial-up connection through AOL to describe their love of Nirvana. I can assure you that nobody is subscribing to your company’s blog, religiously reading every post about how your CFO participated in a charity fun run or whatever. Forget the idea of your corporate blog as a publication.
So here’s what I’d advise. Divide your content into two categories: what you want to tell people and what they want to know.
- “What you want to tell people” is the content that you publish and promote, touting your blog and sharing on social media.
- “What people want to know” is organic content, where people will find you through the search engine.
Make the first category your blog and the second a series of articles called a "library" or "knowledge base." You promote the former but not the latter, which keeps a lot of those pieces away from the scrutiny of any directly engaged followers or customers you might have.
If you want to understand how this helps you avoid the scrutiny that you’re worried about, imagine that you sell a unit testing framework. Your panel of reviewers might balk at an article called “what is unit testing” as “too beginner” and might mercilessly let the author have it.
But you’re not out on Twitter saying, “Hey, check out our hot new content about ‘what is unit testing’.” Instead, you’re letting beginners find you through Google and facilitating a gentle introduction in a judgment-free zone.
3. Prominently Feature a Guest Blogging Program and Run Your Planned Articles Through It
Here’s another fairly simple way to avoid worries of “hurting your brand.” Distinguish the content on your site from your brand.
Consider this simple process:
- Put up a landing page explaining that you strongly believe in community engagement, and so you feature all sorts of engineers, of different backgrounds and experience levels, as guest writers.
- Set up a submission form and a process for screening, curating, and publishing submissions with light editing.
- Publish the submitted content alongside the content from your freelancers under this umbrella.
The aforementioned Twilio had a robust guest blogging program, as do companies like LogRocket and Digital Ocean. This drives massive community engagement, as well as organic and direct traffic.
The key thing to understand here is that you're separating the content from the brand, thus removing a level of fear and risk from publishing the content. You, of course, still review that content and don't let plagiarized, promotional, or off-color material slip through.
But you also don’t worry that someone is going to disagree with something in a post and deem you technically incompetent. After all, you have an easy explanation. You just point them to your “write for us” landing page explaining that you like to get a wide variety of takes from the community.
4. Stand Up a Subsidiary Property and Position Yourself as an Advertiser/Patron
There might be cases where you really, truly can’t afford anything but the most polished content appearing on your website. Most companies think this is the case when it isn’t actually true. But if you’re a founder-driven business or high-end consulting firm or the like, you might have a legitimate, calculated market position of public flawlessness.
That doesn't mean you can't market at scale. It just means you have to get creative and create even more separation than the guest blogging option provides.
A great way to do that is to stand up a separate domain as a curated community site, and run publications through there. Perhaps you’re a niche data science consultancy that only wants to publish deep insights from your founder. No problem. You can buy up beginnerdatascience.com and turn it into a niche community site and organic juggernaut.
Hit Subscribe actually owns a site called Make Me a Programmer. It’s hardly a juggernaut, but it is a community site that started a couple of years ago as a simple experiment to see if we could build traffic with nothing but a little elbow grease and some good topic planning. And the answer was yes:
With fewer than 20 articles aimed at organic traffic, all targeting keywords with minimal search volume, we've built a site that peaked in January at 7,000 organic visitors. So imagine what you could do with a significant budget, eight posts per month projecting out to 330K monthly visitors after three years, and active community engagement on the site.
Or, don’t imagine, and I’ll tell you. Three years in, you’d be the sole owner and sole “advertiser” on a site with half a million highly qualified monthly visitors. Even if you don’t want to risk tainting your primary brand, doesn’t this subsidiary property seem interesting?
This is the option most insulated from your brand, and one where you can sleep pretty soundly at night with one or even zero reviewers.
Kill the Bureaucratic Review
Whenever you catch me making a super long post, you can bet that it's sales enablement content. But the difference between me and most people who write such content is that I'm justifying passing on business, rather than begging you to work with me. There's a decent chance you're here because I told you during a discovery call that Hit Subscribe wasn't a fit for you (please forgive me and thanks so much for reading all the way to this point).
Whether that’s the case, or whether you wandered by here for some other reason, my call to action and exhortation is the same. Kill your multi-party review process. Kill it now and kill it with fire.
Corporate content creation is such a weird, finely balanced thing that exists at the intersection of art and balance sheets, creativity and funnel metrics. I grew up wanting to create art as a novelist, and yet I now earn my living as a CEO and ruthless guarantor of ROI for our customers.
The novelist in me wants to encourage you to bring in the whole gang for reviews—to really roll up our collective tweed sleeves, puff our corncob pipes, and ask, "What is a good blog post, anyway?" But the CEO, management consultant, and customer advocate in me can't ignore the data.
So whatever you do, whether you ever work with Hit Subscribe or not, and however you source your content, do one simple thing. Set the maximum number of gatekeeper reviewers for blog content at one, and figure out the rest from there.