A new temperature reconstruction

In honor of the National Research Council of the National Academies committee to study “Surface Temperature Reconstructions for the Past 1,000-2,000 Years” meeting at this moment, I offer my own climate reconstruction based on the methods blessed by dendroclimatology. The graph below shows reconstructed temperature anomolies over 2000 years, with the surface temperature measurements from 1850 from CRU as black dots, the individual series in blue and the climate reconstruction in black. I think you can see the similarity to other published reconstructions (see here), particularly the prominent ‘hockey-stick’ shape, the cooler temperatures around the 1500s and the Medieval Warm Period around the 1000s. What data did I use? Completely random sequences. Reconstruction methods from dendroclimatology will generate plausible climate reconstructions even on random numbers!

The steps to construction were as follows:

  1. Generate 100 sequences of 2000 random numbers each with a long-term persistent (LTP) stochastic process.
  2. Select sequences with a positive correlation with CRU.
  3. Average the selected series.
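
These steps can be sketched in a few lines of Python. This is a minimal illustration rather than the exact code used here: an AR(1) process with a coefficient near one stands in for a true LTP process, and a synthetic trend-plus-noise series stands in for the CRU instrumental record.

```python
import numpy as np

rng = np.random.default_rng(42)
n_years, n_cal, n_series = 2000, 150, 100

# Hypothetical stand-in for the CRU instrumental record (1850 onward):
# a warming trend plus noise. The real exercise used the actual CRU data.
cru = 0.01 * np.arange(n_cal) + rng.normal(0, 0.2, n_cal)

def persistent_series(n, phi=0.98):
    """A highly autocorrelated AR(1) series -- a simple stand-in for a
    true long-term persistent (LTP) process."""
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0, np.sqrt(1 - phi ** 2))
    return x

# Step 1: generate the random sequences.
series = np.array([persistent_series(n_years) for _ in range(n_series)])

# Step 2: keep only those that correlate positively with "CRU" over the
# calibration period (the last n_cal years).
r = np.array([np.corrcoef(s[-n_cal:], cru)[0, 1] for s in series])
selected = series[r > 0]

# Step 3: average the selected series to get the "reconstruction".
recon = selected.mean(axis=0)
print(f"{len(selected)} of {n_series} series selected")
```

Roughly half the series survive the selection step, and because every survivor correlates positively with the calibration series, their average necessarily does too.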

The series show a ‘hockey-stick’ pattern due to step 2 – only those random series that correlate with temperatures are selected. This step is analogous to using only trees with a positive correlation with temperatures. Outside the range of the calibration temperatures the average of the series reverts to the mean of the random numbers, which in this case is the chosen zero value of the calibration temperatures. This produces an upward drift in values back through time. The maximum value is entirely arbitrary: base the anomalies around a different zero value, and the MWP will go higher or lower. LTP is necessary because a similar set of i.i.d. series would be very unlikely to have members that correlate significantly with the CRU temperatures. Around 5% of the LTP series were significant in this case due to ‘spurious correlation’.
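
The role of persistence in that last point can be checked directly. In the sketch below (again with an AR(1) stand-in for LTP and a synthetic trend in place of CRU, so the exact rates are illustrative only), persistent series exceed the naive i.i.d. 5% correlation threshold far more often than i.i.d. series do – the ‘spurious correlation’ at work:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cal, n_trials = 150, 500

# Hypothetical stand-in for the calibration-period temperatures.
target = 0.01 * np.arange(n_cal) + rng.normal(0, 0.2, n_cal)

def ar1(n, phi=0.98):
    # Highly autocorrelated AR(1), a simple stand-in for an LTP process.
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0, np.sqrt(1 - phi ** 2))
    return x

def exceedance_rate(gen):
    # Fraction of random series whose correlation with the target exceeds
    # the nominal two-sided 5% threshold for i.i.d. data (~1.96/sqrt(n)).
    thresh = 1.96 / np.sqrt(n_cal)
    rs = [abs(np.corrcoef(gen(n_cal), target)[0, 1]) for _ in range(n_trials)]
    return float(np.mean(np.array(rs) > thresh))

iid_rate = exceedance_rate(lambda n: rng.normal(size=n))
ltp_rate = exceedance_rate(ar1)
print(f"i.i.d.: {iid_rate:.1%}  persistent: {ltp_rate:.1%}")
```

A significance test that properly accounted for the autocorrelation would flag far fewer of the persistent series, which is exactly why naive selection against temperature is so forgiving of random inputs.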

Here is an attempt at describing the logical implications of this. This demonstration does not necessarily falsify the results of climatic reconstruction using tree rings. If we state the theory of tree-ring reconstructions as a finding P and C, where P is the premise and C is the conclusion, falsification would be a finding like P and not C – that is, that using real tree-ring data leads to a temperature reconstruction that disagrees with reality. Demonstrating the same results with random numbers is a finding like Q and C, essentially an alternative theory generating the same conclusions. What one would also need to prove is that P is equivalent to Q – that tree-ring series are essentially indistinguishable from, say, random numbers generated by a stochastic process with LTP. Alternatively, to counter the claim that climate reconstructions from tree rings are based on random numbers, one would need to show that the reconstruction could not be generated from a random series. The data from the calibration period could not be used for this, as it is already a biased sample. Nor would coherence between series support P not being equivalent to Q, due to spatial correlations between the series.

Thoughts on “A new temperature reconstruction”

  1. #23. There may be no need for similar projects, as recent work by climate scientists such as Jim Hansen of NASA shows increasing carbon dioxide is not an ‘indicator’ of global warming. For the full paper read http://www.pnas.org/cgi/content/full/97/18/9875.

    Global warming in the twenty-first century: An alternative scenario. James Hansen, Makiko Sato, Reto Ruedy, Andrew Lacis, and Valdar Oinas

    Abstract

    A common view is that the current global warming rate will continue or accelerate. But we argue that rapid warming in recent decades has been driven mainly by non-CO2 greenhouse gases (GHGs), such as chlorofluorocarbons, CH4, and N2O, not by the products of fossil fuel burning, CO2 and aerosols, the positive and negative climate forcings of which are partially offsetting…

  3. This is a very important point about the dendro proxies.

    It is possible to determine a significance value for tree rings correlated with temperature. For example, if you have to trawl 20 sites to find one with 95% significance (taking into account the autocorrelation, of course) then you haven’t really found anything.

    Of course, since the temperature reconstruction folk don’t even archive the sites they do use, let alone the sites they don’t use, the opportunity to work out whether or not the sites they have found are actually useful has been lost.

    Your graph is just missing one thing – select the instrumental record with the most extreme warming (which runs right up to the present day, after your proxies have run out) and overlay it with a big, thick, black line over everything else; this helps to draw people’s eyes away from any embarrassing downturns near the end as well. Although if the downturn was real bad, you could always just blame it on “anthropogenic” effects and truncate.

  5. Dave,

    any chance of redoing the graph in greyscale or other monochrome graphics? I would like to publish it in AIG News due out in a few weeks and have 2 days to do the final editing.

    Just a word document and a 300 dpi graphic would be ok.

    Otherwise, excellent

  6. David,

    Can you give the RE and R2 statistics for your “reconstruction”? Does the RE show spurious significance?

    Also, can you publish your code on how you created your hockey stick, then everyone can have a go.

  9. Wonderful! One, if not the most important, question that has come out of the Hockey Stick controversy is whether technique (or methods if you like) drive results. The work of McIntyre and McKitrick showed that the Mann, Bradley and Hughes transformation (giving the “uncentered” principal components) drove results. There are, of course, many ways of reconstructing past temperatures from proxy data. What David has asked here is the direct question: does the use of correlation of proxy and temperature as a criterion for selecting amongst the multitude of proxies drive the results? Apparently the answer to this is also yes. Wonderful is not for the results, but for asking and following up on the direct question.

    I would take a slightly different view of the logic of the conclusions. The premise (‘P’) is really two things: the methods (‘M’) and the data (‘D’). The conclusion (‘C’) is unfortunately not directly falsifiable under the current arrow of time (at least unless we find those ancient Greeks invented an accurate thermometer and kept meticulous records, along with starting the rest of the modern world). The reconstruction is M given D implies C: M|D => C, or M(D) = C. What David has shown is M(rD) = C, where rD is the random data.

    There is also the technical point that David’s C is presumably not identical to any of the well-known reconstruction C’s. Thus, a next step is a similarity-of-conclusion statistic. The _(fill in number) well-known reconstructions need to be sampled as to statistics that define their similarities and differences. That work won’t be easy because (I believe) what is desired is a visual similarity statistic. I have tried that on the Hockey Stick, and I could not find any universal single statistic. If you proceed, you may end up having to use a set of statistics without explicit weightings.

    Let me also suggest (while I am handing out work assignments ) that taking the first principal component (from the standard correlation matrix) and looking at it versus the average will be worthwhile. I believe that you will find that the first PC will magnify the hockey stick effect at the end and usually flatten the long tail (shaft) of the “unobserved” period.

  10. David,

    I wish I had – we are reorganising the AIG website, and AIG Newsletters are usually posted as PDFs in the web archives. (AIG is the Australian Institute of Geoscientists, and the website is http://www.aig.asn.au/.) I am the poor old editor and have a deadline today for the current issue No 83; Issue 84 is deadlined for 31 April for publication May 2006. I discovered another technical paper had lousy graphics which won’t print out too well, and I am scurrying around for alternatives. Holiday weekend here in Perth, Oz, so some time to get things done.

    Cheers

    Louis

  13. Lovely bit of work here. I think I’d strengthen your point about the implications, however. What you’ve shown is that the methodology applied produces that “hockey stick” even when given inputs with no information content (necessarily, randomness is the converse of information) other than the CRU data.

    In other words, the MBH98 methodology adds no information to the CRU data alone.

  15. #18 Yes. I was trying to look up how it was done exactly yesterday and ran out of time. Did you know the link to MM05 (GRL) is broken on Climate Audit? I guess the supplementary information in MBH98 will tell me exactly.

    #19 See the post “Scale Invariance for Dummies” on this site. Here I have shown all the major recons have high H or LTP.

  16. Before you get too carried away, you should think about how the validation statistics (like RE etc.) are tested for significance (i.e. how well your reconstruction validates against the part of the data you didn’t use in the calibration). What you will find is that your exercise will produce an RE close to 0, which is the definition of statistically useless. In fact, this exercise is the equivalent of what is done to define that level. Real reconstructions are only useful if they perform better than random noise, and this is something that has been a staple of the field for decades.

  19. #12. I am not so sure the RE will be zero if the test is on data held back from a small section of the instrumental temperature range because, remember, the series are highly autocorrelated, so the points are in no way independent, even if not used in the calibration. Anyway, there is no point in speculating about something that I can easily test. When I find the exact protocols used I will attempt a replication of that part of the process next week. It should be useful also to show the contribution of the two main parts of the biasing process: 1. selection of series, and 2. calibration with the linear model. My guess is they contribute about half each to the coercion of the series to fit the message, and if I added RCS it would do the same.

  20. #6 Thanks Martin. Actually I don’t want to sound big-headed but I sent this figure to Steve a few weeks ago thinking it was too trivial to follow up. Then I posted it as a bit of fun. I am glad it has been well received. Just shows what I know. I appreciate all the suggestions for tightening up the argument.

    It is a good suggestion to test the random reconstruction against published series. I would think that if the reconstruction over its entire length fell within the confidence intervals derived from the standard error of published series, such as shown by the blue shading in the figure of Briffa 2001, then you could be pretty sure that the null hypothesis of no difference between the random reconstruction and the proxy reconstructions would stand.

  21. David, thanks for pointing out the Scale Invariance for Dummies post. If the reconstructions have high LTP, can we infer that the underlying tree-ring data series also have high LTP?

    Also, in this ‘reconstruction’, why is it that the mean of the random numbers is the zero of the calibration period? This is not the case in the reconstructions, is it? Does this have any implications for the upward drift in values back through time that you notice?

  25. #16. Yes, both good points. I think we can assume the series have LTP, as they are a simple sum, as does the global temperature. But it wouldn’t hurt to test all these formally. As to the zero point, you expect that the random series eventually reverts to the mean, and this is determined at the calibration stage, as the y-axis intercept coefficient adjusts the series up or down. This is the explanation for the upward drift, as you rightly suggest. I would have to look at what has been done in each paper, but if this is true then there would be a strong correlation between the chosen zero point for the anomalies and the eventual height of the averaged series. This may explain the differences between the heights of the MWP in each of the reconstructions.

  27. David: here’s another formula with interesting practical applications:
    1. Generate 36 series with LTP characteristic of Jacoby-type site chronologies (10 on file);
    2. Pick the 10 most “temperature-sensitive”
    3. Generate another 30 series and pick the most hockey-stick shaped series.
    4. Average the 11 series.

    You get a hockey-stick-shaped pattern. Now compare to the Jacoby NH reconstruction – it’s about the median.

    Now ask Jacoby for the results of the other 26 series. Refused: “As an ex-marine I appeal to the concept of a few good men”.

    Let’s chat about this.

  28. Note to “fallacy alert” (#12)

    Quite often, the calibration/cross-validation stats apply to the final regression step, but not to the proxy selection. Well, to be more precise, we don’t really know whether the proxy selection stage overlaps the cross-validation because all too often the proxy selection occurs “behind closed doors”.

    More strictly, I guess, the proxies should be correlated with a number of local temperature series, the average of which gives the global temperature variations, but I believe this would yield a remarkably similar result to the one David has presented here.

  29. David, did you edit in the sentence “Base the anomalies around a different zero value, and the MWP will go higher or lower”, or did I just miss it earlier?

    Anyway, the reason why I ask is that it seems to me that selecting the zero value to base the anomalies on ought to be added as one of the steps for constructing this series. I refer you to McIntyre-McKitrick (2005), in which a “hockey-stick” is defined as a series in which the mean of the calibration period differs from the mean of the entire series by at least one standard deviation. By this definition, the series you generated here is not a hockey stick, but it is clear that you could come up with a series that fits this definition by arbitrary choice of zero.

  30. #29 Great distinction, fragment. I guess the reason I didn’t make it a step is it seems so minor, but it makes a huge difference to the interpretation. I thought a “hockey stick” was anything that had a blade and then stayed within the range of the blade throughout the rest of the period. It should be a step, and I should make a plot with different zero values.

  35. Can you apply some rigor to this statement:

    “I think you can see the similarity to other published reconstructions (see here), particularly the prominent ‘hockey-stick’ shape, the cooler temperatures around the 1500s and the Medieval Warm Period around the 1000s.”

  37. Yes. The great hockey stick debate , appearing at a right-wing web site near you soon. I guess it just makes people less guilty about their SUVs. Are there similar projects for other indicators of global warming like carbon dioxide or hurricanes ? Sorry, I know hurricanes are out.

  39. I enjoyed and had fun reading this very clever and simple example of how processes with long-range dependence behave and what the implications may be. I could also say “how nature behaves”, as more and more evidence is accumulated that natural processes behave like processes with long-range dependence. Having been involved in research about processes of this type, I think I know this behaviour and its consequences, and I also think that most people have not understood it; even people that have published on Hurst have failed to point out the consequences. In several talks I have said that the Hurst behaviour is a bomb in the foundations of climatology, hydrology and perhaps statistics; this means that the edifice has to be rebuilt. In this respect, this article is constructive and also didactic. A first lesson for me is that a good example may be more important than a rigorous theoretical analysis. A second lesson is that it is difficult to deal with (e.g. reconstruct) highly uncertain (and poorly understood) processes, such as the climatic processes. What impressed me above all, because I did not know it, is the way that proxies may have been constructed, if I believe this article and the comments posted here and in Climate Audit. Is this method of choosing the best-fit samples (tree rings) really “blessed by dendroclimatology”? Are the samples with bad fit disregarded at all? At least they could be used to estimate the uncertainty of reconstructions, which I guess is very high – and additional to the uncertainty implied by the Hurst behaviour.

  41. Demetris: you ask – Is this method of choosing the best-fit samples (tree rings) really “blessed by dendroclimatology”? Steve McIntyre on his blog has documented this prevailing view (e.g. http://www.climateaudit.org/?p=365), and there is an extraordinary quote by Esper et al. (Cook, Krusic, Peters, Schweingruber) from Tree-Ring Res. 2003, p. 92: “The ability to pick and choose which samples to use is an advantage unique to dendroclimatology.” Not only does ‘cherry picking’ appear blessed, they are apparently uniquely qualified to perform it.
