Up at 5AM: The 5AM Solutions Blog

Breast Cancer Risk by Family History and Genetics

Posted on Thu, Mar 25, 2010 @ 01:07 PM

The New England Journal of Medicine published an article last week, by Wacholder et al., on how well genetic factors predict the risk of developing breast cancer compared with existing risk factors. There have been quite a few news articles and blog posts about this study. An editorial in the same issue of the journal calls it a 'tiny step' toward better risk assessment, but also says that calling it a disappointment is 'a little premature'. A UPI story is titled 'Genes don't help predict breast cancer'. Several articles are more encouraging, such as posts on the 23andMe and DeCODE blogs, although you could argue that those companies have some bias toward genetic results being more significant.
But let's take a look at the details. The study uses factors from a traditional predictive model, the Gail model, to predict a woman's chance of getting breast cancer. The Gail model takes into account factors such as age at first childbirth and number of first-degree relatives with breast cancer. The authors then built a model from 10 genetic factors that have been linked to breast cancer risk, and finally combined the genetic and non-genetic factors. In each case they constructed a logistic regression model on the factors, training on a set of about 5,000 women with breast cancer and 5,000 women without. It's worth pointing out that they didn't actually use the Gail model itself; they constructed their own model using some of the same factors.
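To make the modeling concrete, here's a rough sketch in Python of the kind of logistic regression scoring the authors describe. The factor names, weights, and intercept below are invented for illustration (the two SNP IDs are real breast-cancer-associated variants, but these are not the paper's fitted coefficients):

```python
# Sketch of the kind of model the authors fit: a logistic regression that
# turns risk factors into a probability. Weights are made up for illustration;
# the paper fit its own coefficients on ~5,000 cases and ~5,000 controls.
import math

def risk_probability(factors, weights, intercept):
    """Logistic model: probability = 1 / (1 + exp(-(intercept + sum(w * x))))."""
    z = intercept + sum(weights[name] * value for name, value in factors.items())
    return 1 / (1 + math.exp(-z))

# Hypothetical combined model: two Gail-style factors plus two risk-allele counts.
weights = {
    "first_degree_relatives": 0.4,   # number of affected first-degree relatives
    "age_first_birth_over_30": 0.2,  # 1 if first childbirth after age 30
    "snp_rs2981582": 0.25,           # risk-allele count (0, 1, or 2) at a
    "snp_rs3803662": 0.15,           # breast-cancer-associated SNP
}
woman = {"first_degree_relatives": 1, "age_first_birth_over_30": 1,
         "snp_rs2981582": 2, "snp_rs3803662": 0}
print(round(risk_probability(woman, weights, intercept=-2.0), 3))  # 0.289
```

The model's output for each woman is a score like this one, and the question the paper asks is how well those scores separate the women who got cancer from those who didn't.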
They compared the quality of the models using the area under the curve (AUC) of a receiver operating characteristic (ROC) curve. A ROC curve plots the tradeoff between sensitivity and specificity for a test as you vary the cutoff used to decide what counts as a positive and what counts as a negative. In this case, the logistic regression models produce a probability of the person getting breast cancer, a number between 0 and 1. Calling everything above 0.5 a positive would give you a certain number of true and false positives, and let you calculate sensitivity and specificity; a cutoff of 0.75, for instance, would produce different sensitivity and specificity values. A ROC curve plots the sensitivity and specificity for all possible cutoffs between 0 and 1. The area under the curve is a measure of the accuracy of a test; for a random test it is 0.5 (a diagonal line), because any increase in sensitivity is offset by a decrease in specificity. AUCs greater than 0.5 indicate a test that is better than random. The AUC can also be defined as the chance that a true positive will score higher on the test than a true negative.
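Since the AUC is doing all the work in this comparison, here's a small Python sketch (with made-up scores and labels) of how a ROC curve and its AUC are computed by sweeping the cutoff, and of the equivalent rank-based definition:

```python
# Sketch: building a ROC curve and computing its AUC by sweeping cutoffs.
# Scores and labels are invented; the paper's models output a probability
# between 0 and 1 for each woman.

def roc_points(scores, labels):
    """Return (1 - specificity, sensitivity) pairs for every cutoff."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]
    for c in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= c and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= c and y == 0)
        points.append((fp / neg, tp / pos))
    points.append((1.0, 1.0))
    return points

def auc(points):
    """Trapezoidal area under the ROC curve."""
    return sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(points, points[1:]))

# Toy data: a model that is better than random but far from perfect.
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1]
labels = [1,   0,   1,   1,   0,    0,   1,   0,   0,   0]
print(round(auc(roc_points(scores, labels)), 3))  # 0.792

# Equivalent definition: the chance a true positive outscores a true negative.
pos_scores = [s for s, y in zip(scores, labels) if y == 1]
neg_scores = [s for s, y in zip(scores, labels) if y == 0]
frac = sum(p > n for p in pos_scores for n in neg_scores) / (len(pos_scores) * len(neg_scores))
print(round(frac, 3))  # 0.792 -- same number
```

The two printed values agree, which is the equivalence the last sentence above describes.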
In the study, a test based on the Gail model's factors had an AUC of 0.58. The best genetic-only model (they tried several variations) had an AUC of 0.597. When they combined the non-genetic and genetic factors, the AUC increased to 0.618. So under the combined model, a woman truly at risk of getting cancer would have a higher risk score than one not at risk only 61.8% of the time. In the appendix they show that the error bars on those numbers, and the effect of overfitting, are smaller than the differences between those AUCs. Even so, the differences are not dramatic, so I can see why some reports claimed the differences were only modest or insignificant.
The 23andMe and DeCODE blog posts have some interesting spins on these numbers. DeCODE's post, by their founder Kari Stefansson, says we should consider the percentage increase of how far the AUCs are above 0.5 (the AUC of a random test): the 0.08 margin for the non-genetic test increases to 0.118 for the combined one, an increase of greater than 45%.
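That arithmetic is easy to check (the AUCs are from the NEJM paper, the framing from the DeCODE post):

```python
# DeCODE's framing: measure each AUC as its distance above 0.5 (the AUC of
# a random test), then take the relative increase between the two models.
random_auc = 0.5
gail_auc = 0.58        # non-genetic model
combined_auc = 0.618   # genetic + non-genetic model

gail_gain = gail_auc - random_auc          # 0.08
combined_gain = combined_auc - random_auc  # 0.118
relative_increase = (combined_gain - gail_gain) / gail_gain
print(round(relative_increase * 100, 1))   # 47.5 (percent), i.e. "greater than 45%"
```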
The 23andMe blog has another take, saying there is a 'net 12% improvement in risk classification' when comparing the non-genetic test to the combined one. I can't figure out where this 12% came from, and I've posted a comment on their blog asking for clarification. Another interesting thing the 23andMe blog mentions is another paper in Nature Precedings (a non-peer-reviewed Nature web journal) on the same topic, published just a day after the NEJM article. That paper reports a similar study on a different set of patients, with an AUC of 0.594 for the combined model and 0.557 for the Gail model.
While I agree that AUC is a good measure of a test's overall effectiveness, when you actually apply a test in real life you often need to decide at what score you will take action for a patient. That requires picking a cutoff to classify patients into low- and high-risk categories. To get a handle on the sensitivity and specificity of a test at a particular cutoff, you need to see the actual ROC curve used to calculate the AUC. Those ROC curves are unfortunately not published in the NEJM paper. They are present in the Nature Precedings paper, however, as you can see below:

[Figure: ROC curves from the Nature Precedings paper]

If you look at the solid line in the above graph, which is the combined genetic and non-genetic one, you can pick points on that graph and see what the sensitivity and specificity are. Since I don't have the numbers behind the graph, I have to estimate this by eye, but I think it's instructive. For instance, if you look at 60% sensitivity, that point on the graph is roughly at 55% specificity. So that means if you used that cutoff, you'd be identifying 60% of the cancer patients as high risk and 40% of them as low risk. And you'd be identifying 55% of the non-cancer individuals as low risk and 45% of them as high risk. The dotted line marked 'Gail only' would probably be at about 50% specificity at 60% sensitivity, so the combined model is definitely better. At 80% sensitivity you're only getting about 35% specificity for the combined model, however, and even lower for the non-genetic model. You could do similar exercises for different cutoffs, but my main conclusion is that even though there is better risk assessment with a combined score when compared to non-genetic factors alone, these accuracy numbers are relatively unimpressive. If a physician is going to make choices about patient care, I'd imagine they'd want some numbers much better than any of these results, genetic or non-genetic.
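To see why I find these numbers unimpressive, you can turn an eyeballed (sensitivity, specificity) point into counts of women classified each way. Here's a sketch; the 12% lifetime risk figure is my own assumption for illustration, not a number from either paper:

```python
# Turning an eyeballed (sensitivity, specificity) point into patient counts.
# The 12% prevalence is an assumed lifetime-risk figure for illustration only.

def classify_counts(sensitivity, specificity, prevalence, n=1000):
    """Expected counts per n women screened at this cutoff."""
    cancer = prevalence * n
    healthy = n - cancer
    return {
        "true positives":  sensitivity * cancer,        # high risk, gets cancer
        "false negatives": (1 - sensitivity) * cancer,  # low risk, gets cancer
        "true negatives":  specificity * healthy,       # low risk, stays healthy
        "false positives": (1 - specificity) * healthy, # high risk, stays healthy
    }

# The ~60% sensitivity / ~55% specificity point read off the combined-model curve:
for label, count in classify_counts(0.60, 0.55, 0.12).items():
    print(f"{label}: {count:.0f}")
```

Under these assumptions, per 1,000 women you'd flag roughly 396 healthy women as high risk to correctly flag about 72 who will get cancer, while missing 48, which is the kind of tradeoff a physician would have to weigh.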
On the flip side, however, the main source for optimism here is that there doesn't seem to be much room for improvement in tests based on family and health history, whereas with future discoveries of more genetic risk factors for breast cancer, those results will only get better. There is also the possibility that other kinds of molecular factors, such as epigenetic effects and RNA/protein expression, will also prove useful. What remains to be seen, however, is how long it will take to get results that are not just better in a relative sense, but useful in an absolute sense for doctors and patients.
