I have at times been told that I am a conservative reviewer in terms of my scores. I don’t believe that to be true. Rather, I believe that the differences between my scores and those of most other reviewers can be attributed in large part to the manner in which we each taste wines.

In short, I taste wines blind in varietal sets. When a person tastes blind, the scores are based entirely on what is in the glass. That is not the case when tasting non-blind. There the reviewer is influenced by a whole variety of factors, including knowledge of the producer and the wine’s pedigree, the vintage, the price, and other things. To say otherwise would be insincere.

Imagine. In one case, you are tasting a wine blind, trying to evaluate its quality based on your experience with a region. In another, you are aware that this is the top-end wine from a producer widely regarded as the best in a particular growing region, and that the vintage is thought to be the best of the decade.

Are those two wines going to be considered in the same fashion and score the same? Of course they are not. In the latter case, the wine is getting a big leg up from preconceived notions.

You can test this easily for yourself. Give the same wine to two groups of people. Tell one group that it is an expensive wine that was named ‘Wine of the Year’ by a prestigious publication. Tell the other that it is something you found steeply discounted at Cash & Carry. See how differently the two groups perceive the wine.

For this reason, if you look at publications that review wines blind in a controlled setting, such as this one, the scores will naturally be lower than those from publications reviewing non-blind. This is not to say that scores from blind tasting are more conservative. Rather, I believe scores from reviewers rating non-blind are, in some cases, inflated by the various biases introduced.

As a result, when comparing scores across publications, consumers will often see differences – sometimes very large differences – for the same wine. How, then, should consumers, or wineries for that matter, interpret scores?

Unfortunately, comparing scores across publications ranges from difficult to impossible because of four factors. The first is the manner in which the wines were sampled, blind vs. non-blind, which I discussed above. The second is the environment in which the wines were sampled. The third is reviewer experience. The fourth is reviewer palate. I will explore each of these.

Tasting in a standardized vs. a non-standardized environment

All of the wines that I review are tasted in a standardized, controlled environment. This means that every wine is tasted in the same way under the same conditions. I taste in the same location, at approximately the same time, with the wines at the same temperature, and using the same stemware.

In contrast, some critics review wines in a variety of settings – at wineries, at home, at mass tastings, and sometimes a combination of all three. Some wines sampled at home might be tasted with 40 other wines, or they might be tasted individually with dinner.

To be clear, scores for wines tasted in non-standardized environments – at home versus at a winery versus at a mass tasting – can’t even be compared to each other, let alone to scores from other publications.

These are radically different approaches. Consequently, they will naturally produce very different scores, even for the same reviewer and the same wine.

Tasting in non-blind, non-standardized settings introduces biases that affect scores

A common place many reviewers taste wines is at the winery with the winemaker present. This naturally introduces biases.

I have confirmed this by comparing the scores I have given wines when visiting a winery or meeting with a winemaker (these scores and notes are strictly for informational purposes and are never published) to scores for the same wines sampled blind in a standardized setting (these are the scores that are ultimately published).

When I have done so, I have noticed that wines sampled blind in a standardized setting typically score one to two points lower than they did when I scored them at the winery. I have seen differences as large as four points. It is extremely rare in my experience to rate a wine higher blind in a standardized setting than I did when visiting a winery and taking notes.

Intellectually, this makes sense. Producers are excited about their wines and, when you meet with them to taste their wines, they are trying to get you, the critic, excited about them as well. And guess what? It works.

This means that wines tasted with the producer – particularly a producer who is an effective pitchperson for their wines – have an inherent advantage over wines that are not tasted with the producer, or where the producer does a less effective job of ‘selling’ the wine to the reviewer.

Is that fair? Shouldn’t we be judging the wine on its merits in the glass, not on the song and dance the reviewer is being given while tasting? Moreover, the consumer isn’t getting that same song and dance. They are just tasting the wine.

Many consumers have, however, had a somewhat similar experience. They are on vacation, tasting at the winery, being regaled by the tasting room staff or perhaps the winemaker about the merits of the wine. They love the wine and buy some, only to arrive home and find it not quite as delightful. That is an example of the bias that can occur when tasting in a non-standardized setting.

Mass tastings introduce biases that affect scores

Many reviewers also score wines at mass tastings. This too introduces biases. First, there is the effect of alcohol. You can register above the legal limit after tasting 100 wines in a day, even if you spit every wine. I have confirmed this with a breathalyzer on multiple occasions.

Second, do wines tasted alongside 100 others really have the same shot as wines tasted with just a few others, perhaps over a series of days at home, or as wines tasted at a winery with the winemaker present? Of course not. Yet many publications treat them as the same.

Why do many publications review in non-standardized settings? For one simple reason: It greatly cuts down on the amount of work they have to do.

For those doing mass tastings, it requires considerably less effort to have someone else collect, unpack, and organize hundreds of wines that the reviewer then tastes than to have the reviewer (or workers at a publication) do so. Similarly, it involves less work to taste wines at a winery than to have the wines shipped, unbox them, and taste them.

However, to me, it is more than worth the additional effort to taste blind and in a standardized setting to help remove bias and ensure consistency of process and therefore consistency across scores. Wineries deserve it; the wines deserve it; and consumers deserve it.

Reviewer experience

Another important factor in understanding scores is reviewer experience. This is true both for experience with the region in question and for overall experience reviewing wine.

The example that I commonly give is Burgundy. I could fly to Burgundy, taste several hundred wines in a few days, score them, and then publish those scores. After all, I’ve been tasting and reviewing wine for over two decades.

However, while I might drink and enjoy Burgundy regularly, I don’t live and breathe Burgundy. How relevant would those scores truly be? Would they be as relevant as those from someone who has been tasting wines from the area for decades? Likely not.

Similarly, if someone were to come to the Pacific Northwest with no knowledge of the region’s wines, I’m always interested to see what they might have to say. But those scores come with an asterisk because of that lack of experience with the wines, regardless of the person’s experience with world wine.

Note that reviewer experience very much intersects with the biases introduced by how the wines are tasted. Let’s say I’m new to tasting a region, but I know that a certain producer is considered one of the best there. That makes my job easier, right? But what about the side project from the same producer that I don’t know about? It doesn’t get that same consideration, and its score might be negatively impacted.

We also see an intersection with large-format tastings. When you are tasting a large number of wines in a single sitting, a lot of things can go wrong. One is the effect of alcohol. Another common one is tannin accumulation. This is when wines start to seem more tannic and bitter than they actually are.

For a reviewer with knowledge of a region, it’s pretty clear when this is happening. First, you start to notice more tannins in all of the wines and can take steps to correct for it. Second, even if you haven’t noticed, when you unblind the wines, you might see wines that you thought were overly tannic but that, based on prior experience, shouldn’t be. The reviewer without experience with the region simply scores the wine poorly and moves on. The reviewer with experience backtracks to confirm the impression and make sure it wasn’t an error on their part due to tannin accumulation.

These are not merely intellectual considerations. I have seen examples in Washington where these exact scenarios have played out. As someone who has tasted the state’s wines broadly for more than 20 years, I saw the issue pop off the page. Unfortunately, it resulted in some very poor scores for a number of highly regarded wineries, all because of the reviewer’s lack of experience with the region. That is bad for all concerned.

Bottom line, experience matters. Experience with world wine also matters. A critic must always taste broadly and put wine in a larger context.

Personally, in addition to reviewing wines from the Pacific Northwest, I taste as broadly as I can, and I review and score those wines and compare my scores and notes to those of others. Still, broad experience with wine is no substitute for regional expertise.

Reviewer palate

The final factor is reviewer palate. Every individual has a unique palate, as well as certain sensitivities and insensitivities. These can make one favor or disfavor certain wines.

The example I frequently give is wines from the Rocks District in Walla Walla Valley. These can be polarizing wines: some people love them, and other people very much do not, even those with exceptional palates.

Some find this confusing, but I find these differences in perception understandable. What if a particular nuance of a wine that one person finds interesting is something another person smells or tastes 100-fold more intensely? (Hundred-fold differences in sensory perception are not uncommon.) To the first person, the nuance would seem pleasing, whereas to the second it would seem off-putting.

As I have stated elsewhere, when I taste and review wines, I am trying as best I can to put aside my personal preferences. That is to say, I am trying to rate the wine on its merits for the style in which it is made, regardless of whether it is a wine I might personally want to drink at home. However, we all still have different sensitivities, to a greater or lesser degree, to certain things. This can impact scores.

How to compare scores across publications

Given that I review wines in a blind, standardized setting (and did so previously at Wine Enthusiast), the most direct comparison to my scores would be those from other publications that do the same.

There are three other publications I am aware of that consistently taste blind and in a controlled environment: Wine Enthusiast, Wine Spectator, and Wine & Spirits. There, the differences you see between my scores and theirs for the same wine are largely due to critic palate and experience rather than the setting in which the wine was tasted.

For all other publications, the differences you are seeing could be due to how the wine was tasted (blind vs. non-blind), the setting in which it was tasted (standardized vs. non-standardized), critic palate, and critic experience, as well as any combination thereof.

These compounding factors make it impossible to meaningfully compare scores across publications. Unfortunately, all consumers see is the score, not what is behind it.

Final thoughts

How wines are tasted is critically important to the scores the wines receive. For this reason, consumers should know and care about how wines are reviewed. Unfortunately, differences in how reviewers and publications taste and review wine are often largely opaque to consumers.

Few publications provide detailed information about how wines are sampled for review, in part because it is not in their best interest. It is my strong belief that all publications and critics should be transparent about the manner in which they taste wine.

Consumers should know whether a wine was tasted blind in a standardized setting or non-blind at a winery with the winemaker, alongside 100 other wines, or at home. It makes a difference.

I am advocating strongly here for blind tasting in a standardized environment when reviewing wines. To me, reviewers tasting each wine in a standardized environment should be non-negotiable. It’s the only way to ensure consistency across one’s own scores. At the very least, how a publication scores wines should be transparent to all.

The manner in which wines are reviewed shouldn’t just be important to consumers. It should also be important to wineries and retailers.

Many wineries and retailers promote the highest score they have for whatever it is they are trying to sell. This is true even if the way in which the wine was scored is somewhat dubious. That is understandable on some level; they are trying to sell something. The long-term effect, however, as we have seen, is score inflation. That leads to a plank off which all will eventually fall.

I believe consumers, wineries, and retailers should be more selective. At the very least, they should be aware of these differences and demand that publications and reviewers be more transparent about the conditions under which they taste and review wines. To me, the validity of the wine ratings themselves depends upon it.

Read more about how to interpret scores at ‘Who cares about wine ratings?’ and ‘Can scores be compared across publications?’

Updated June 2023.