Representing users, transforming services | Polar Insight


The Polar Post

Avoiding selection bias

Recognising bias leads to better research and better outcomes.


Selection bias arises when a pool of research participants, or the data they produce, is not representative of the target population you are analysing. A variety of simple mistakes can introduce selection bias into social research and, unfortunately, it occurs more often than one would expect. It makes findings less reliable and, ultimately, less useful.

In this short post, we will explore three forms of selection bias - sampling bias, pre-screening bias and participant attrition - discussing ways in which they can be avoided.

Sampling bias

Although sampling bias can creep into a project in several ways, each leads to the same problem: the sample being studied cannot provide the data you need to draw meaningful conclusions about what the target population thinks, says, feels and does. A common form of sampling bias is self-selection, in which certain groups of people are disproportionately drawn to taking part in studies because of shared characteristics - they may be more chatty or extroverted, or simply enjoy giving their point of view. Self-selection may be difficult to avoid, especially when a project requires volunteers. If you are able to, however, draw from a sample that does not require self-selection; or, as Polar Insight does, actively recruit based on the characteristics you are looking for.
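The effect of self-selection can be made concrete with a toy simulation. In this sketch (entirely illustrative - the population, scores and opt-in rule are assumptions, not real study data), each person has an "extroversion" score, and more extroverted people are more likely to volunteer. The self-selected sample's average then sits well above the true population average, while a randomly recruited sample of the same size does not:

```python
import random

random.seed(42)

def mean(xs):
    return sum(xs) / len(xs)

# Toy population: each person has an extroversion score between 0 and 1.
population = [random.random() for _ in range(10_000)]

# Self-selection: the chance of volunteering is proportional to the score,
# so extroverts are overrepresented among volunteers.
volunteers = [score for score in population if random.random() < score]

# Random recruitment: every person is equally likely to be sampled.
random_sample = random.sample(population, len(volunteers))

print(f"population mean:    {mean(population):.2f}")    # close to 0.50
print(f"self-selected mean: {mean(volunteers):.2f}")    # close to 0.67, skewed upward
print(f"random-sample mean: {mean(random_sample):.2f}") # close to 0.50
```

The point of the sketch is that the bias comes from the recruitment mechanism itself, not from small numbers: even with thousands of volunteers, the self-selected estimate stays skewed.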

Pre-screening bias

Another common pitfall is a biased participant screening process that results in a pool of participants who share too many characteristics. For large projects, a double-blind process may be necessary to avoid this bias, in which selection decisions are made by an individual who is independent of the research goals.

Participant attrition

Your sample can also be affected during your project. For example, if participants drop out of the research in a biased way (i.e. not at random), then the remaining participants are unlikely to represent the original sample pool and, therefore, the population at large. This dropout rate - otherwise known as participant attrition - is most commonly seen in social research that requires ongoing interventions. To avoid this bias, it is important to follow up with participants who have dropped out, in order to determine whether their attrition is due to factors common to other participants or is indeed random. Be wary that if too many participants drop out of your research, you may have to restart your study altogether, as too few participants can limit the strength of your conclusions.
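One simple way to check whether attrition looks random is to compare dropouts against completers on a characteristic recorded at recruitment. The sketch below uses invented ages and a rough standardised difference between group means as the check (the data, threshold and function name are all illustrative assumptions, not a formal statistical test):

```python
import statistics

# Hypothetical follow-up data: ages recorded at recruitment, split by
# whether the participant completed the study or dropped out.
completers = [24, 31, 28, 45, 38, 29, 33, 41, 27, 36]
dropouts = [62, 58, 49, 67, 55, 61]

def attrition_gap(stayed, left):
    """Difference between group means, expressed in units of the
    standard deviation of all participants (a rough effect size)."""
    overall_sd = statistics.stdev(stayed + left)
    return (statistics.mean(left) - statistics.mean(stayed)) / overall_sd

gap = attrition_gap(completers, dropouts)
print(f"standardised gap: {gap:.2f}")

# A large gap suggests dropout is not random: here, older participants
# left the study, so the remaining sample skews young.
if abs(gap) > 0.5:
    print("attrition looks systematic - treat the remaining sample with caution")
```

A small gap on every recorded characteristic is consistent with random attrition; a large gap on any of them, as in this example, is a warning that the surviving sample no longer represents the original pool.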

Martha Schlee