The Magic of the Cycling Helmet:
Observations on Bicycle Helmet Safety, Protection and Efficacy
Bob Wheeler, sometime statistician
The question is, “what things does a cycling helmet protect you from, and how effective is it?” Do you know? Really! I’ll tell you what I think, and maybe encourage you to puzzle a bit about it when the mood strikes you. The Wikipedia article on bike helmets is one place to start.
A helmet is guaranteed to protect you from cuts and scrapes and mussed hair and other hazards of the road. It is a magic amulet that shields you from bad things and guarantees your safety as you mix with autos that sometimes squeeze you to the side of the road. In addition, wearing one makes you a good person, knowledgeable about cycling and clearly removed from the hoi polloi who wear odd clothes and ride upright clunkers. In any case, the club has a rule about it.
I wear a helmet when I cycle, and have done since the first Bells were made. I have crashed, broken helmets and shown the pieces to all and sundry, saying “what a lucky boy am I.” I have friends who have done the same. A helmet is such a comfort and so habit forming that I hesitate even to ride to the end of my driveway to check a bicycle repair without wearing one. I recommend the wearing of a helmet to all and sundry, and feel queasy when I see a happy-go-lucky fellow pedaling along the road without one. How about you?
So what is the real dope about helmets? I’ll tell you very briefly: they may help, but if they do, it is not as much as some think. There have been no randomized trials designed to test the efficacy of helmets, but statistical studies in the literature tend to show, in spite of extensive bickering, Curnow (2005), that helmets apparently help in hospital-worthy accidents. All papers containing data from the years 1987-1998 are summarized by Atewell (2000). The figure of merit used in these papers is an odds ratio, which compares the relative odds in two categories. The odds ratio estimates in these studies vary considerably, partially because of the differing data sets, partially because of the study designs, and partially, I argue, because the population odds ratio is not a constant. In any case, an overall value was obtained via a meta-analysis (don’t ask) which indicated an odds ratio of 0.40 for helmets versus no helmets; in other words, helmet wearers had head injury odds of about half those of unhelmeted riders. All but one of these studies found that helmets offered protection from head injury.
Odds ratios are useful, but what is wanted is a direct statement of the risk of a head injury for helmeted and unhelmeted bicyclists. I have calculated these from the Atewell (2000) data, which assume that a hospital-worthy accident has occurred. My calculations (see the Appendix) give an estimated probability of head injury for a helmeted bicyclist of about 18%, as compared to about 40% for an unhelmeted bicyclist. The estimated relative risk is the ratio of these values, 0.45, which is statistically smaller than unity, indicating a risk reduction due to helmets. This value is a statistical estimate and one should not take it as absolute, but rather make allowance for how it may vary from one data set to another. A 95% confidence interval for the relative risk is from 0.34 to 0.55.
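To make the arithmetic concrete, here is a short sketch in Python. The counts are invented (they are not the Atewell data); they are chosen only so that the conditional risks come out near the 18% and 40% figures above.

```python
# Hypothetical 2x2 table of hospital-worthy accidents (invented counts,
# NOT the Atewell data): rows are head injury / other injury,
# columns are helmet / no helmet.
a, b = 90, 200   # head injuries: helmeted, unhelmeted
c, d = 410, 300  # other injuries: helmeted, unhelmeted

risk_helmet = a / (a + c)      # P(head injury | helmet, accident)
risk_no_helmet = b / (b + d)   # P(head injury | no helmet, accident)
relative_risk = risk_helmet / risk_no_helmet
odds_ratio = (a * d) / (b * c)

print(f"risk with helmet:    {risk_helmet:.2f}")     # 0.18
print(f"risk without helmet: {risk_no_helmet:.2f}")  # 0.40
print(f"relative risk:       {relative_risk:.2f}")   # 0.45
print(f"odds ratio:          {odds_ratio:.2f}")      # 0.33
```

Note that the odds ratio (0.33 here) comes out smaller than the relative risk (0.45); this bias is discussed in the Appendix.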
One of the earlier studies, Thompson (1989), produced an odds ratio corresponding to an 85% reduction in odds. It is frequently cited, for example by the CPSC (1998), as the reduction in the risk of head injury when wearing a helmet. This is not correct: it does not imply, as most people who cite it suppose, that wearing a helmet will reduce head injuries by 85%. The relative risk in this particular study is actually 0.35, with a 95% confidence interval of approximately 0.10 to 0.63.
This is not the place for a detailed criticism of the methodologies used, but I must at least note, as others have, Franklin (2000), that the effects ascribed to helmet wearing may in fact be due to helmet wearers riding more safely and thus having fewer accidents involving head injury. Only one study has examined the injuries among those bicyclists who did not have head injuries but were involved in hospital-worthy accidents. This paper, Spaite (1991), found that a measure of injury severity (ISS) was much smaller, significantly so, for helmeted bicyclists. The study involved only accidents with motor vehicles, and the data suggest that helmeted riders are involved in fewer high-impact crashes than unhelmeted riders. I can certainly rationalize this result, since careful cyclists are more likely to be injured by being overtaken by a car than while riding across an intersection in front of one: the latter encounter must surely result in the more serious injury.
A point to note here is that in all the papers cited, helmets were not assigned at random but chosen by the individual riders, which could make it seem that helmets were responsible for the reduced risks when in fact the reduction was due only to helmet wearers being careful riders. There is no way to tell from these data.
It is interesting to consider whether or not a typical WCBC member would be taking an unacceptable risk by riding without a helmet. To figure this out we need to adjust the risk by the actual exposure to hospital-worthy accidents. It is well and good to know about a reduction in risk, but if the event to which the risk applies occurs once in a blue moon, what is it worth? It turns out that the number of hospital-worthy accidents is quite small for WCBC members. I don’t remember exactly, but I think the club had only two or three hospital-worthy accidents last year; thus the odds of a WCBC member having a hospital-worthy accident are perhaps 3 in 350, or an estimated probability on the order of 0.009, about 1%. This should be combined with the conditional estimated probabilities of head injury cited above, 0.18 and 0.40, to obtain unconditional probabilities. Thus one has 0.009×0.18, or about 2 chances in 1000, for helmeted WCBC cyclists, and 0.009×0.40, or about 4 chances in 1000, for unhelmeted WCBC cyclists. In other words, an unhelmeted WCBC cyclist has a risk about twice that of a helmeted cyclist; however, the actual odds in both cases are long. It is worthwhile to remember that no matter how small the probability of an event, that event will come around in its appointed time.
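The back-of-the-envelope combination above can be checked in a few lines of Python:

```python
# Combine the club's rough accident rate with the conditional head-injury
# risks from the studies (0.18 helmeted, 0.40 unhelmeted).
p_accident = 3 / 350  # roughly 0.009: a hospital-worthy accident in a year

p_head_helmeted = p_accident * 0.18
p_head_unhelmeted = p_accident * 0.40

print(f"helmeted:   {p_head_helmeted:.4f}")    # about 2 in 1000
print(f"unhelmeted: {p_head_unhelmeted:.4f}")  # about 4 in 1000
print(f"risk ratio: {p_head_unhelmeted / p_head_helmeted:.1f}")  # about 2
```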
Of course, helmets are good for other things than preventing serious head injury. I’d wear a helmet if for no other reason than that during my next tumble it would take the scratches and dings rather than my head.
Children have a much greater risk than do WCBC members because they suffer serious accidents with higher frequency: in Pennsylvania, children under 15 have a rate in the neighborhood of 50 per 100,000 for hospital-worthy accidents, as compared with 10 per 100,000 for adults, PA (1997). Fortunately, the rate is lower in Delaware, DE (2008). In the years 1996-1999 there were 74 hospitalizations of children, and in the years 2002-2005 there were 118, which works out to about 24 per year. One could expect about 24×0.40, or about 10, of these to be head injuries if the riders were unhelmeted, and 24×0.18, or about 4, if they were helmeted; but tell a kid that they must wear a helmet and they will get mom to drive them. In comparison, there were 634 hospitalizations of children due to non-bicycle falls in 2002-2005, or about 158 per year. Helmets, by the way, are cheap: a Bell Radar helmet can be had for about $16, and it meets the CPSC standard just as well as the $200 designer specials that some wear.
This naturally leads to the thought that helmets should be legally mandated. Unfortunately, mandatory helmet legislation has not been much of a success. Both in Australia, which passed it in 1990, and in various parts of the US such as California, studies have shown that the principal effect of legislation is to reduce cycling without a concomitant decrease in injuries, Franklin (2000). This lack of success makes one wonder about helmet efficacy, and about the calculations given above. Reports which claim success for the legislation have generally been written by governmental agencies with an axe to grind.
Helmet standards are a problem. If you look inside your helmet you will find a sticker indicating that it meets the CPSC or Snell specification. All specifications, Standards (2008), are designed to make sure that a solid ball strapped inside a helmet will decelerate the ball to a specified degree by crushing the foam or other material. Hey, my head isn’t a solid ball (it deforms under impact like a basketball). Jim Sundahl of Bell Helmets examined returned helmets for a time and failed to see crushing in infants’ helmets, Sundahl (1998), which may or may not apply to adult helmets. I’ve seen broken helmets, of course, but that isn’t what is being tested, so what good is the standard? (Well, for one thing, it’s a great advertising gimmick, and it keeps the government bb stackers off the manufacturers’ backs.)
As a final item, there is some speculation that modern helmets may in fact be harmful since they can in theory increase the rotational effects which the brain experiences during impact by making the head larger. In addition, three studies have shown that neck injuries are increased for helmet wearers: Wasserman (1990), McDermott (1993), Rivara (1997). The fact that it is now the fashion to make helmets which do not protect the back of the head is also of concern. A substantial number of head injuries in the various studies seem to occur in parts of the head not protected by helmets. Motocross helmets, such as the Specialized Deviant, are available that offer greater protection to the back of the head and face: blows to the jaw transmit energy to the back of the head, and facial scarring is a common result of a fall.
In summary: wear your helmet, and replace it every few years, because both the shell and the polystyrene liner deteriorate with protracted exposure to ultraviolet light and become brittle; a brittle helmet that breaks loses whatever protection the foam lining might provide. Make sure the helmet fits properly (I often see club members with twisted or slack straps). But it is probably not a good idea to expend a great deal of effort promoting helmet use at the expense of things that have a greater chance of being useful to cycling, such as insisting that highway departments pave over cracks at the sides of roads. The UK’s national cyclists’ organization, CTC (2004), agrees with this view. End of sermon!
CPSC. 1998. “News from the CPSC”. http://www.cpsc.gov/cpscpub/prerel/prhtml98/98062.html
CTC. 2004. CTC Policy Handbook – March 2004. http://www.ctc.org.uk/DesktopDefault.aspx?TabID=3839
DE. 2008. 2008 Childhood Injury in Delaware. Delaware Health and Social Services
Atewell, R., Glase, K., McFadden, M. 2000. “Bicycle helmets and injury prevention: a formal review”. Available from Australian Transport Safety Bureau, PO Box 967, Civic Square ACT 2608
Curnow, W.J. 2005. “The Cochrane Collaboration and bicycle helmets”, Accident Analysis and Prevention 37 (2005) 569-573.
Franklin, John. 2000. “The effectiveness of cycle helmets”. http://www.cyclecraft.co.uk/helmets.html
Mantel, N. and Haenszel, W. (1959) Statistical aspects of the analysis of data from retrospective studies of disease. J Natl Cancer Inst 22, 719-748.
McDermott, F.T., Lane, J.C., Brazenor, G.A., Debney, E.A. 1993. “The effectiveness of bicycle helmets: a study of 1710 casualties”. Jour. of Trauma. 34. 834-845.
PA. 1997. Bicyclist injuries in Pennsylvania during 1994. Pennsylvania Department of Health
Rivara, F.P., Thompson, D.C., Thompson, R.S. 1997. “Epidemiology of bicycle injuries and risk factors for serious injury”. Injury Prevention. 3. 110-114.
Spaite, D.W., Murphy, M., Criss, E., Valenzuela, T.D., Meislin, H.W. 1991. “A prospective analysis of injury severity among helmeted and nonhelmeted bicyclists involved in collisions with motor vehicles”. Jour. of Trauma. 31. 1510-1516.
Standards. 2008. Bicycle Helmet Standards http://www.bhsi.org/standard.htm#CPSC
Thompson, R.S., Rivara, F.P., Thompson, D.C. 1989. “A case-control study of the effectiveness of bicycle safety helmets”. New Engl. Jour. Med. 320. 1361-1367.
Thompson, M., Rivara, F. 2001. “Bicycle related injuries”. American Family Physician. 63. No. 10. May 15.
Wasserman, R.C., Buccini, R.V. 1990. “Helmet protection from head injuries among recreational bicyclists”. Am. Jour. Sports Medicine. 18. 96-97.
Appendix

The studies of helmet wearing risks all make use of the odds ratio, which has statistical advantages since it may be adjusted by well known statistical methods for confounding variables, such as age and type of accident.
The observed odds ratio t is a statistic that is used to measure the disparity of values in a 2×2 table such as in Table 1.
Table 1 Observed 2×2 table

                Helmet   No Helmet
Head Injury        a         b
Other Injury       c         d
It is calculated by dividing the observed odds in the first column o1 = (a/c) by the observed odds in the second column o2 = (b/d), which gives t = o1/o2 = (ad)/(bc). It is important to note that t is symmetric and can equally well be taken as the odds ratio for the two rows. In addition, it is unaffected by multiplying rows or columns by a constant; such a multiplication corresponds to changing the amount of data taken for a given row or column, other things being equal.
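A few lines of Python, with made-up counts, confirm the symmetry and scale invariance just described:

```python
def odds_ratio(a, b, c, d):
    """The observed odds ratio t = (a*d)/(b*c) of a 2x2 table."""
    return (a * d) / (b * c)

# Made-up counts for illustration.
a, b, c, d = 90, 200, 410, 300
t = odds_ratio(a, b, c, d)

# Symmetry: the row odds ratio (a/b)/(c/d) equals the column
# odds ratio (a/c)/(b/d).
assert abs(t - (a / b) / (c / d)) < 1e-12

# Scale invariance: tripling a row (say, taking three times as many
# subjects for that row) leaves t unchanged.
assert abs(t - odds_ratio(a, b, 3 * c, 3 * d)) < 1e-12

print(round(t, 3))
```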
The assumption is that the observed values are a random sample of N observations from a table involving probabilities as in Table 2.
Table 2 Table of probabilities

                Helmet   No Helmet
Head Injury      p(a)      p(b)      P
Other Injury     p(c)      p(d)      1-P
                  Q        1-Q
One may form odds and odds ratios for either rows or columns: for rows, p(a)/p(b) and p(c)/p(d), with the odds ratio θ = (p(a)p(d))/(p(b)p(c)); for columns, p(a)/p(c) and p(b)/p(d), with the same odds ratio θ. The fact that the odds ratio is the same allows inferences about columns that might not otherwise be possible due to the way in which the data was collected; in addition, the fact that the odds ratio does not depend on the marginal probabilities P and Q makes it useful in cases where estimates for these probabilities are not available.
Table 3 Table of column risks

                  Helmet          No Helmet
Head Injury     p = p(a)/Q     q = p(b)/(1-Q)
Risks are defined conditionally with respect to rows or columns; thus the risks of Head Injury given Helmet and given No Helmet are p = p(a)/Q and q = p(b)/(1-Q), respectively, and their ratio λ = p/q is the relative risk: these are shown in Table 3. In general the relative risk is the most interesting quantity; however, it does depend on the marginal probabilities and cannot ordinarily be calculated when the row data are obtained from different samples with arbitrary numbers of observations in each.
The odds ratio θ is a biased approximation to the relative risk λ; in particular θ = λ(1 + δ/(1-p)), where δ = p-q, so the factor multiplying λ is less than or greater than unity depending on the sign of the difference between p and q, and θ understates or overstates λ accordingly. It is common to report an estimate of θ and interpret it as if it were an estimate of λ. The usual justification for this is that it is approximately correct when p is small; the magnitude of δ seems to be ignored in this justification.
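A quick numerical check of the identity, using the illustrative risks 0.18 and 0.40 from the body of the article:

```python
p, q = 0.18, 0.40  # illustrative risks: helmeted, unhelmeted
lam = p / q                            # relative risk lambda
theta = (p / (1 - p)) / (q / (1 - q))  # odds ratio theta
delta = p - q

# The identity theta = lam * (1 + delta/(1 - p)) holds exactly.
assert abs(theta - lam * (1 + delta / (1 - p))) < 1e-12

# Here delta < 0, so the factor is below 1 and theta understates lam.
print(round(theta, 3), round(lam, 3))
```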
The relative risk λ, as stated above, is the quantity of most interest. It may be estimated from cohort and prospective studies; a prospective study for example, might choose individuals at random from the cycling population and follow them until a sufficient number have been injured. Such estimates are not proper for case-control studies such as those referenced here, because there are two populations, one consisting of head injured cyclists (the cases) and the second of injured but not head injured cyclists (the controls). The numbers of individuals in the two samples are arbitrary; also it is difficult to determine whether or not the characteristics of the two populations are the same. In a prospective study, the sampling is with respect to helmet wearing, while in a case-control study it is with respect to injuries.
In a prospective study, one can calculate the maximum likelihood estimates of p and q from the columns: they are a/(a+c) and b/(b+d), and their ratio a(b+d)/(b(a+c)) may be taken as an estimate of the relative risk λ. These estimates involve data from both rows of the table. In a case-control study the two rows can differ by an arbitrary amount, depending on the investigator’s decision about how much data to take, which makes the maximum likelihood estimates useless. The odds ratio estimate t is unaffected by such considerations, since the arbitrary sample sizes cancel; this is why it is used, and why users try to argue that it approximates the relative risk λ.
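The point is easy to demonstrate numerically. In this sketch (made-up counts), doubling the control row, an arbitrary design choice, changes the naive relative-risk estimate but leaves the odds ratio alone:

```python
def naive_relative_risk(a, b, c, d):
    """Column-wise ML estimate (a/(a+c)) / (b/(b+d)); valid prospectively."""
    return (a / (a + c)) / (b / (b + d))

def odds_ratio(a, b, c, d):
    return (a * d) / (b * c)

a, b, c, d = 90, 200, 410, 300  # made-up counts

# Suppose the investigator had enrolled twice as many controls (second row):
rr_1x = naive_relative_risk(a, b, c, d)
rr_2x = naive_relative_risk(a, b, 2 * c, 2 * d)
or_1x = odds_ratio(a, b, c, d)
or_2x = odds_ratio(a, b, 2 * c, 2 * d)

print(round(rr_1x, 3), round(rr_2x, 3))  # the "relative risk" shifts
assert abs(or_1x - or_2x) < 1e-12        # the odds ratio does not
```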
There are conditions under which the relative risk may be calculated for case-control studies, such as when the relative size of the case and control populations is known. If the marginal probability for the case population is P, then it is possible to calculate a Bayesian* estimate of p and q as r(p) = a/(a+cs) and r(q) = b/(b+ds), where s = (a+b)(1-P)/((c+d)P). The estimated relative risk, r(p)/r(q), is l = a(b+ds)/(b(a+cs)). The scaling s rescales the elements of the second row into the units of the first row, and l differs from the maximum likelihood estimate only by a sample-size weighting, an intuitively satisfactory result.
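A minimal sketch of this estimate, with invented counts and, for illustration, P = 1/3:

```python
def case_control_relative_risk(a, b, c, d, P):
    """Bayesian relative-risk estimate l = a(b+ds)/(b(a+cs)) for a
    case-control 2x2 table, given the case-population marginal P."""
    s = (a + b) * (1 - P) / ((c + d) * P)
    r_p = a / (a + c * s)  # estimated risk of head injury given helmet
    r_q = b / (b + d * s)  # estimated risk of head injury given no helmet
    return r_p / r_q

# Invented counts, NOT the Atewell data; P = 1/3 for illustration.
l = case_control_relative_risk(90, 200, 410, 300, P=1/3)
print(round(l, 3))
```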
The odds ratios were obtained by combining the Atewell (2000) data according to the Mantel-Haenszel (1959) procedure. The risks were the Bayesian estimates obtained as weighted geometric means of the risks in each table; the weights were the square roots of the sample sizes. The value of P was assumed to be 1/3, from the data in PA (1997); Thompson (2001) reported it to be in the 22% to 47% range. It should be noted that the estimated relative risk does not change much over the range from P=0.2 to P=0.4, and the confidence intervals are nearly the same.
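A sketch of the Mantel-Haenszel pooled odds ratio; the two study tables here are invented, not the Atewell tables:

```python
def mantel_haenszel_odds_ratio(tables):
    """Mantel-Haenszel pooled odds ratio over 2x2 tables (a, b, c, d):
    sum over tables of a*d/n divided by sum of b*c/n, where n = a+b+c+d."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Two invented study tables (a, b, c, d).
tables = [(90, 200, 410, 300), (30, 80, 160, 140)]
print(round(mantel_haenszel_odds_ratio(tables), 3))
```

The weighting by 1/n gives more influence to larger studies, which is why it is preferred to simply averaging the per-study odds ratios.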
An R package containing functions for making these calculations and using simulation to obtain significance and confidence levels may be downloaded from Helmet.
* p(I|H) = p(H|I)p(I)/(p(H|I)p(I) + p(H|~I)p(~I)) is estimated by (a/(a+b))P/(aP/(a+b) + c(1-P)/(c+d)), or a/(a+cs), where s = (a+b)(1-P)/((c+d)P).