Scent Detecting and the “White Dog” – Does she serve any useful purpose?

What is a “White Dog”?

Let me start by stating that, within Scent Detecting circles, any reference to a white dog has absolutely nothing to do with her coat colour. The dog may be white … but, equally, she could be black or brown or brindle or red. Her actual coat colour is completely irrelevant.

Instead, a white dog is a term used almost exclusively within the world of competitive Scent Detecting to denote an experienced Scent Detecting dog who is tasked with searching an area prior to the start of a competition. The presumption seems to be that if the white dog can successfully complete the search then it follows that it will be a fair test for all dogs competing in this same area, with the same hides, at some later point in time.  

But is this really so? 


Scent Detecting Competitions as a form of Assessment

A Scent Detecting competition is a form of assessment. It’s an examination. It’s a test. It’s a one-off event. It’s “a means of collecting data to demonstrate [that] an acceptable standard … has been reached” (Hand, 2006).

To this end, there’s an expectation that the competition Judge, guided by his or her particular competitive organisation’s regulations, will set “appropriate tasks” (Hernandez, 2012) for all competitors working at a particular level of expertise. The use of a white dog prior to the start of a competitive event is supposed to confirm that this is the case – the searches are appropriate and the competition fair.

But, how can you be sure that “appropriate tasks” have been set for you and your dog? What actually constitutes a fair test of your dog’s abilities? How might you recognise a quality assessment process? Does the use of a white dog actually address any of these very understandable concerns? 

Well, according to Brady (2005), for any assessment process to perform its function – provide useful feedback and act as a gatekeeping mechanism to halt further progress until a particular standard has been reached – it must be valid, reliable, discriminatory and practical. These are referred to as the “Cardinal Criteria” of assessment (Quinn, 2000) and are considered the hallmarks of a quality assessment process.


Validity

Validity refers to the extent to which the assessment process – test or competition – measures what it is intended to measure. It’s about finding the correct measuring tool for the job in hand. For example, if you want to assess the weight of something – how heavy it is – you’d be far better to reach for a set of weighing scales than a tape measure or stop-watch!


Similarly, if you’re assessing a dog’s Scent Detecting skills – which can encompass an enormous range of attributes – then you’ll need to use the correct measuring instrument.   

One possibility is to use a criterion referenced assessment process where the performance of the competing Scent Detecting dog and handler can be measured against a pre-determined set of criteria, or standards, that are considered to reflect the necessary qualities, skills and attributes of a reliable Scent Detecting team.

Searches and hides are planned to reflect the criteria set out within the criterion referenced assessment document. The competing dog and handler will then either achieve these criteria, partially achieve these criteria or fail to achieve some or all of the criteria. 

Working a white dog prior to the start of a competitive Scent Detecting event will do little to strengthen the validity of the assessment process and here’s why – 

  1. By its very definition, a white dog (and her handler) will be an experienced Scent Detecting team.
  2. The competition may be specifically for those Scent Detecting teams with far less experience than the white dog and her handler. 
  3. As an experienced team, if the white dog and her handler successfully complete the planned searches, this provides NO information as to the appropriateness (validity) of the searches for the less experienced competitors that will follow on.
  4. All that can be concluded from this exercise is that the white dog and her handler were able to successfully complete the searches at that particular moment in time.
  5. The white dog and her handler have not been able to validate the criteria used to plan the searches and hides. They have been unable to provide evidence that the competition actually measures what it is intending to measure, i.e. the Scent Detecting qualities, skills and attributes of a less experienced Scent Detecting team.

Ultimately, when it comes to ensuring the validity of any assessment process – test or competition – the responsibility for setting appropriate searches must lie with the Judge and their particular competitive organisation rather than with the white dog and her handler.


Reliability

The reliability of any assessment process is concerned with the consistency of results. Put simply, would different competition Judges, using the same assessment criteria, agree on the quality of performance of a particular Scent Detecting dog and handler team? Would they award the same, or similar, marks? If so, then the assessment process would be considered reliable. The Judges have both interpreted, and applied, the assessment criteria in a similar fashion.

An assessment’s reliability can be strengthened by the construction of carefully expressed criteria, ones that are less vulnerable to individual interpretation by the competition Judge. As White (1986) states, if you want to form a clear, unambiguous picture of an individual’s progress it’s important to focus on concrete, directly observable behaviour. Carefully expressed criteria can help you do just that.

Clearly, the use of a white dog immediately prior to the start of any Scent Detecting competition can do nothing to strengthen the reliability of the assessment process. Increasing the reliability can only be achieved during the early planning stages, when criteria are first being considered.

And that’s precisely where a white dog might be helpful – during the initial writing of criteria – long before any competition takes place. If a group of Judges, using a set of proposed criteria, can agree on a white dog’s performance then this could demonstrate a reasonable degree of reliability in the assessment process.  

Discriminatory Powers

Discrimination is the ability of the assessment process to differentiate between varying levels of ability. Assessment processes need to be able to discriminate between those Scent Detecting teams that have reached the required standard and those who have not. Assessment processes, including competitive events, should not be so difficult that all Scent Detecting teams are likely to fail or so easy that all are likely to pass.

To be clear, the ability of an assessment process to discriminate between levels of achievement is considered a good thing. Gate-keeping is an important feature of any assessment process.


But how does the use of a white dog and handler team help here? How does a white dog successfully completing one or more searches prior to a competition ensure that the assessment process will discriminate between levels of ability in the dogs that will be competing later? 

As discussed earlier, the presumption seems to be that if the white dog can successfully complete the search then it will be a fair test for all dogs competing in this same area, with the same hides, at some later point in time. If this were the case then there would be an expectation that all competing dogs would be as successful as the white dog. In other words, the competition has failed to discriminate between competitors. The white dog has served no useful purpose.


Practicality

Assessment processes should be practical to implement in terms of financial costs, time and ease of administration. If the assessment process is not practical then this will impact all of the other cardinal criteria as corners are cut and / or assessment criteria incorrectly applied.

Many competitive organisations do not insist on the use of a white dog prior to the start of a Scent Detecting competition. There may be no suitably experienced dog available, and time may be short. The use of a white dog may not be practical.

Now the question must be, if a white dog is not mandatory for all Scent Detecting competitions, what possible justification is there for using one at any competition?  

And now for some common sense!

  1. A white dog, by definition, refers to an experienced dog and handler team.
  2. By definition, an experienced dog and handler team should be able to out-perform less experienced competing teams.
  3. A white dog that successfully completes all searches prior to the start of a competitive event can provide only limited information about the appropriateness of the planned searches.
  4. The information provided by the white dog is simply that that particular dog, at that moment in time, working under those particular conditions, successfully completed the searches. It provides no useful information on the appropriateness of the planned searches for any competing teams that will be following on.
  5. All competing dogs, working at later points in time, will be subject to markedly different conditions to those experienced by the white dog – changes in temperature, air and wind movement, and contamination of the search area by the target odour and by odours from other dogs, handlers and officials. The white dog’s earlier success may simply have been a result of working at the time she did, under the conditions prevailing at that time. This goes some way to explaining why many competitive organisations use a draw system to allocate the running order for competitors.
  6. Every dog is a thinking, feeling, individual. The use of a white dog cannot address the individuality of every Scent Detecting team. Far better to set clear criteria at an organisational level, that reflect performance expectations for different levels of experience, than rely on a white dog to decide whether particular searches are appropriate for a particular competition.


In conclusion …

Assessment is central to the learning process. It’s important that we get it right. The use of the white dog in competition settings does little, if anything, to strengthen the assessment process and as such cannot support the future learning of competing Scent Detecting teams. Instead, all that the white dog may do is lull the competition Judge into a false sense of security, allowing them to believe that they’ve set a fair assessment for all competing dogs. This may well be far from the case.


Final Note

As with all blogs, I include a reference list. This allows you to investigate the topic a little further, check out the sources of my information and decide for yourself whether my interpretations of the literature represent an accurate reflection of the authors’ original work. Happy reading.

© Lesley McAllister – Scent : Detect : Find Ltd 2021

References / Further Reading

  1. Brady A (2005) Assessment of learning with multiple-choice questions. Nurse Education in Practice. 5. 238-242
  2. Hand H (2006) Assessment of learning in clinical practice. Nursing Standard. 21.4 48-56
  3. Hernandez R (2012) Does continuous assessment in higher education support student learning? Higher Education. 64. 489-502
  4. Quinn FM (2000) The Principles and Practice of Nurse Education. 4th Ed. Cheltenham: Stanley Thorne (Publishers) Ltd
  5. White OR (1986) Precision Teaching – Precision Learning. Exceptional Children. Special Issue: In search of excellence: Instruction that works in special education classrooms. 52.6 522-534


