Best-Fit Integrated Model of Global Temperature

The acronym ARIMA stands for “Auto-Regressive Integrated Moving Average.” Random-walk and random-trend models, autoregressive models, and exponential smoothing models (i.e., exponentially weighted moving averages) are all special cases of ARIMA models. An ARIMA model is classified as an “ARIMA(p,d,q)” model, where the current value of the series is determined by:

* p — the number of lagged terms (AR),
* d — the number of integrations, and
* q — the number of moving average terms (MA).

Here is a good introduction to ARIMA modelling. Normally, models include either only AR or only MA terms, because including both kinds of terms in the same model can lead to overfitting of the data and non-uniqueness of the coefficients.
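
For concreteness, here is a minimal sketch of fitting the two specifications discussed below with the Python statsmodels library. This is not the code used by VS or Paul_K; the file and column names are hypothetical placeholders, and the annual GISTEMP J-D anomalies would have to be supplied.

```python
# Minimal sketch (not the authors' code): fitting the two ARIMA specifications
# discussed in this post with statsmodels.  File and column names are hypothetical.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# annual global-mean anomalies, one value per year (e.g. the GISTEMP J-D average)
anoms = pd.read_csv("gistemp_annual.csv", index_col="year")["anomaly"]

# VS: AR(3) on the first differences, no drift
ar_fit = ARIMA(anoms, order=(3, 1, 0), trend="n").fit()

# B&V: MA(2) on the first differences; trend="t" (a linear trend in levels) is an
# assumed way of expressing the drift term once the series is differenced
ma_fit = ARIMA(anoms, order=(0, 1, 2), trend="t").fit()

print(ar_fit.summary())   # coefficient estimates, AIC/BIC, residual diagnostics
print(ma_fit.summary())
```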

Paul_K writes

VS proposed an ARIMA(3,1,0), or AR, model. B&V concluded an ARIMA(0,1,2), or MA, model. I downloaded the GISTEMP dataset (J-D average), and force-fitted both models using a brute-force non-linear solver to minimize RSS in both instances. (I also tested and confirmed VS’s rejection of a drift term in the AR model, not only for his three-term solution but also for all combinations of a two-lag solution.)

The GISTEMP dataset, 1880 to 2009, consists of 130 datapoints with a variance of 0.062388. The best-fit ARIMA(3,1,0) model looks like this (the lower plot being the residual error):

[Figure: best-fit ARIMA(3,1,0) model against GISTEMP, with residual error in the lower plot]

For comparison, the B&V model looks like this:

[Figure: best-fit ARIMA(0,1,2) model against GISTEMP, with residual error in the lower plot]

The RSS for the best-fit VS model (ARIMA(3,1,0)) is 1.4784. This fit uses 3 degrees of freedom for coefficient estimation and 4 degrees of freedom for initial boundary conditions. The adjusted sd of the residual is then 0.1096 – similar to that reported by VS on Bart’s blog.

By comparison, the RSS for the best-fit B&V model (ARIMA(0,1,2)) is 1.3227. This fit uses 4 degrees of freedom for coefficients and 1 degree of freedom for initial boundary conditions, a total of 5. The adjusted sd of the residual is then 0.1029.
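
As a quick check on the arithmetic, assuming the adjusted sd is computed as sqrt(RSS / (N − degrees of freedom used)) with N = 130 points:

```python
# Check of the adjusted residual sd figures quoted above, assuming
# sd_adj = sqrt(RSS / (N - degrees of freedom used)) with N = 130 points.
from math import sqrt

N = 130
print(round(sqrt(1.4784 / (N - 7)), 4))  # VS ARIMA(3,1,0): 3 + 4 dof -> 0.1096
print(round(sqrt(1.3227 / (N - 5)), 4))  # B&V ARIMA(0,1,2): 4 + 1 dof -> 0.1029
```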

Normally, comparisons between nested models are statistically straightforward, while comparisons between models of different structure are much more complicated. However, in this instance the comparison does look straightforward, and, on a purely statistical basis, I would have to conclude that the B&V model is superior.

Equally importantly, a visual inspection of the model matches suggests that the B&V moving average model captures the character of the data better. More speculatively, although both models produce residuals for which stationarity, zero bias and normality cannot be rejected, the VS model retains a high degree of improbable excursion in its residual error function which is not apparent in the MA model proposed by B&V.

Overall, therefore, I am inclined to accept the B&V model (moving average) over the VS (AutoRegressive) model. As I mentioned in my previous post, if my finding is valid (and I openly acknowledge that it is neither complete nor rigorous) this has a significant effect on your a priori assumptions, David.

DS Note: My model was at the same integration order d=1 as yours, and the conclusion is the same — no need for a non-zero bias or drift term (attributable to global warming). Perhaps I should work it through with those models, but it seems to have already been done.

It is worth reading the Wikipedia articles on the difference between an AR model, which has an infinite impulse response, and an MA model, which has a finite impulse response. The finite vs infinite aspect might provide some intuition.
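
A small illustration of that intuition (the coefficients below are arbitrary examples, not the fitted values): a shock to an MA(q) model dies out after exactly q steps, while a shock to an AR model decays geometrically and never quite vanishes.

```python
# Illustration of finite (MA) vs infinite (AR) impulse responses.
# The coefficients here are arbitrary examples, not the fitted values.
import numpy as np

n = 10

# MA(2): response to a unit shock is (1, theta1, theta2, 0, 0, ...) -- finite
theta1, theta2 = -0.5, -0.2
ma_resp = np.zeros(n)
ma_resp[:3] = [1.0, theta1, theta2]

# AR(1): response to a unit shock is (1, phi, phi^2, ...) -- decays but never ends
phi = 0.6
ar_resp = phi ** np.arange(n)

print(np.round(ma_resp, 3))  # zero from lag 3 onwards
print(np.round(ar_resp, 3))  # geometric decay, nonzero at every lag
```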

Comments on “Best-Fit Integrated Model of Global Temperature”

  1. Hi David,
    You may find it worthwhile to re-examine Equation 6 in the Breusch and Vahid paper: http://cbe.anu.edu.au/research/papers/pdf/wp495.pdf
    They do confirm integration order I(1), but found that the inclusion of a drift term in the difference series (equivalent to a trend in the level series) was significant at the 5% level. This is critically different from the VS model results, and hence the reason I raised my flag about your original post.
    Paul

    • Paul, Looking back at the post I did get a significance of 0.05 on a drift term for post-1960 data. My point in that post was that even though a term potentially attributable to GHG might be detectable (at 95% CL) the magnitude of the contribution is not necessarily high.

      This is a bit like detecting a contaminant in the water supply at 95% CL when the concentration is very low. If the concentration were high then the confidence of detection would be much higher as well (e.g. >99.9%).

      B&V say it much better:

      "Comparing these results with those in equations (1), (2) and (3),
      we see that the evidence for a positive linear trend is much weaker in the presence of a unit root. While the t-statistics of the coefficient of the trend in equations (1), (2) and (3) were around 4, implying a highly significant and precisely estimated positive trend, the t-statistics for the drift terms (the intercepts) in equations (4), (5) and (6) are 1.83, 1.78 and 2.23 respectively. While these are sufficient to reject the null hypothesis of zero drift against a one-sided alternative of a positive drift at the 5% level of significance, they show that there is not as much information about the linear trend in data as implied by equations (1), (2) and (3)."

      Moreover, the coefficients of drift are smaller in 4,5 and 6 than in 1, 2 and 3.

  4. Hi David, in terms of fits, you are absolutely right. As a matter of fact, the various information criteria (by a narrow margin) also prefer the B&V model.

    Personally, I would still stick with the ARIMA(3,1,0), although, as I said on many occasions, this is a matter of personal taste. My main argument is the impulse response functions the two specifications generate.

    ARIMA(3,1,0): [impulse response plot]

    ARIMA(0,1,2) no drift: [impulse response plot]

    Personally, I *believe* that the ARIMA(3,1,0) without drift looks more realistic, although that is for the specialists in the field to decide.

    —————

    Then something on the significances. You are right to stress that the significance level plays a big role here. The actual estimated coefficients differ marginally, and so do their t-values. Again, I guess here’s where ‘theory’ should truly make a difference.

    Also, I would like to point out that the forecast errors differ as well. I have re-estimated both specifications on the 1880-1990 sample, and then did a forecast comparison on the remaining 18 observations.

    First the estimation results (coef, estimate, p-value)

    ARIMA(0,1,2):

    C, 0.004821, 0.0619
    MA(1), -0.543172, 0.0000
    MA(2), -0.186241, 0.0597

    ARIMA(3,1,0) no drift:

    AR(1), -0.447609, 0.0000
    AR(2), -0.339916, 0.0007
    AR(3), -0.339732, 0.0005

    (side note: see how the ARIMA(3,1,0) p-values are robust over sample chosen, unlike the MA based specification)

    Forecast comparison (errors in forecast):

    ARIMA(0,1,2):

    Root Mean Squared Error, 0.180718
    Mean Absolute Error, 0.152856

    ARIMA(3,1,0) no drift:

    Root Mean Squared Error, 0.175601
    Mean Absolute Error, 0.148360

    So the ARIMA(3,1,0) is slightly superior to the ARIMA(0,1,2) in this sense. As you can see, it falls both ways.
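
    In statsmodels terms, this out-of-sample comparison could be sketched as follows. The `anoms` series from the earlier example and the use of trend="t" for the drift term are assumptions, not the estimation code actually used here.

    ```python
    # Sketch of the out-of-sample comparison described above: re-estimate each
    # specification on 1880-1990, then score forecasts of the remaining years.
    # Assumes `anoms` is the annual anomaly Series used earlier; trend="t" is an
    # assumed way of expressing the drift term under first differencing.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    train, test = anoms.loc[:1990], anoms.loc[1991:]

    for order, trend, label in [((0, 1, 2), "t", "ARIMA(0,1,2) with drift"),
                                ((3, 1, 0), "n", "ARIMA(3,1,0) no drift")]:
        fit = ARIMA(train, order=order, trend=trend).fit()
        err = np.asarray(test) - np.asarray(fit.forecast(steps=len(test)))
        print(label, "RMSE:", np.sqrt(np.mean(err ** 2)), "MAE:", np.mean(np.abs(err)))
    ```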

    —————

    Finally, I honestly wouldn’t mind either model being chosen, as they both capture the (near) unit root character displayed in the temperature series. Furthermore, both specifications invalidate the currently standard approach to trend estimation (and inference!) practiced by most climate scientists today.

    Here’s the graph I’m talking about: http://www.ipcc.ch/graphics/ar4-wg1/jpg/faq-3-1

    • correction to ‘impulse response’ section, it’s obviously:

      ARIMA(3,1,0), no drift
      and
      ARIMA(0,1,2), drift

      Best, VS

      • On the impulse response, it looks a lot like damped harmonic motion, overdamped in the ARIMA(0,1,2), i.e. a weight on a spring in air vs a spring in water. Whether a dynamic system is damped or not can be expressed as a single parameter value that is a lot like feedback. The ‘truth’ may be somewhere in between.

    • Agree that incorporating the unit root is the main issue. The B&V paper improves with reading. Of interest is the break-point analysis, showing that with a unit root, the break in 1976 is barely significant. It is more significant in the deterministic case.

  8. Meanwhile, in the land of adjusted temperatures, which method gives the best fit to this graph, which I suspect most of us use mentally after having read paper after paper about adjustments?

    [Figure: proposed adjusted global temperature graph]

    Created by:

    (a) adding 0.2 deg C to all temperatures before 1950;
    (b) leaving 1951 where it is;
    (c) de-stretching the slope from 1951 to present so the right end hits right now at 0.3 deg C cooler than shown on the parent graph.

    The rationale is that there are many stations where 0.2 deg C has been subtracted from the early years; the post-1950 de-stretch compensates the slope for the full effect of UHI and questionable gridding assumptions.

    While the proposal is far from robust, it is simple to recalculate and it at least starts to incorporate the reality of over-ambitious adjustment which I think we mostly agree is there.

    I think that some creativity of our own is better than using an official starting graph that is likely to be defective.

  10. Oh heck, I’ve just realised that my adjusted graph is just like the residual error from ARIMA(3,1,0).

    Maybe my method was too simple, but I’d reckon on an eyeball correlation of better than 0.9.

    I wonder what this means? Why not subtract the ARIMA residual error from my 2-step adjusted graph to see if we have near enough to a straight, horizontal line?

    • Sherro,
      This took me three readings to understand, but I did eventually fall off my chair laughing.
      With respect to your previous post, on a more serious note, the various adjustments to the temperature series do not appear to be particularly relevant to the conversation about the statistical properties (of surface-area-weighted global surface temperature). VS confirmed the findings of several other authors that a unit root cannot be rejected for any of the published temperature series, and the statistical structure looks similar in each case.

  13. I think the contribution of VS is the insistence that the presence of a unit root needs to be taken seriously. Like using OHC http://pielkeclimatesci.wordpress.com/2010/04/2… everything has to be done differently. In the 5 posts I have only touched the surface of what it means to discard deterministic models and adopt an integrated model. It seems that the previous authors have largely been concerned only with whether the increase in temperature is within the limits of natural variation, and have not really changed the basic model. B&V however explore some of the implications and indicate that much more work needs to be done.

  14. Thank you Paul_K2,

    Sometimes there seems to be room to lighten the discussion. I’m pleased you took it in good humour.

    Yes, I understood the significance of the work by David and VS on the unit root finding, but I needed a platform to launch the cartoon. As I have explained to David, I’m too stale at my age to contribute original serious math, but not too old to follow most of it. It’s a delight to read some of the developments that have happened since “divs grads and curls” at Sydney Uni 1961, especially the power of philosophy and logic behind numeracy.

  16. One problem I have with B&V is this:

    “A similar problem arises when we look at the plots in Figure 1 and then ask questions such as: “Was there a cooling trend between 1880 to 1910?” or “Was there a warming trend between 1910 to 1945?” or “Was there a cooling trend between 1945 and 1975?” or “Has the trend become steeper since 1975?” or “Has the trend disappeared since the extremely hot year of 1998?”. The standard statistical tests will overstate the apparent significance of these hypothesised events because these questions are not posed exogenously, as assumed by the statistical theory, but are instead endogenous results of the local behaviour of the very data that are used for testing them.”

    As well as being “endogenous results of the local behaviour of the very data”, aren’t they primarily the result of manifest physical events which have been noted and which have established effects on temperature, such as PDO phase shifts? In respect of the 1976 phase shift, don’t B&V endorse it:

    “Irrespective of the criterion used to judge the break point, and for all three of the data series, the most remarkable break point in the trend stationary models is at 1976. In the unit root models, for the T3GL and NCDC the break point again is 1976.”

    B&V are doubtful of a 1998 phase shift, but would the length of the data be an issue there?

    • Yes 1976 is the most remarkable, but not very significant when the unit root is considered, only when deterministic models are assumed. 1998 is subject to data length.

      Summarizing a few papers, the 60-year periodicity and the temperature increase are the only (marginally) significant features of the 150-year record with the unit root. But if you read my latest post, even that may be overstated, as the apparent periodicity could arise from a random walk ‘pumping the half-pipe’.

      That is, it looks periodic, but the phase is entirely arbitrary. There is nothing exogenous ‘forcing’ or ‘driving’ the period or phase of the oscillation.
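
      A purely synthetic illustration of the point (not the analysis from the post): a random walk concentrates its variance at long periods, so a single 150-year realisation, with no periodic forcing at all, can easily look as if it contains a slow oscillation of arbitrary phase.

      ```python
      # Synthetic illustration only: a pure random walk, with no periodic forcing,
      # still shows most of its power at long "periods" over a 150-year window.
      import numpy as np

      rng = np.random.default_rng(0)
      walk = np.cumsum(rng.normal(scale=0.1, size=150))      # 150 "years" of accumulated shocks

      freqs = np.fft.rfftfreq(walk.size, d=1.0)[1:]          # cycles per year (skip the mean)
      power = np.abs(np.fft.rfft(walk - walk.mean()))[1:] ** 2
      print("apparent dominant period (years):", round(1.0 / freqs[np.argmax(power)], 1))
      ```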

  19. Yes I did, David, and my initial reaction is that God does not throw dice but is a skateboarder! So, the real action is not the alleged temperature/CO2/natural factors cause/correlation but the restorative process? Miskolczi? Perhaps M doesn’t make sense because he is dealing with a RW (random walk)?

    • Well, M remarks somewhere that his constraints are fairly loose — not a strong restoration to equilibrium. I have to look at the restorative processes in the models etc., because so much of the focus is on the deterministic relationship that I haven’t paid any attention to it. If you pay attention to the equilibrium point, you miss the dynamic if it’s actually sitting on a broad plateau.

  22. Regarding Jeffrey Glassman’s contention that there are 134-year (+ve) and 46-year (-ve) solar drivers (with modes of amplification as yet poorly understood) in the post-1850 global surface temperature record:

    Jeff advises that his monograph will shortly appear in a much more widely read (and well known) blog than his own, and be available there as a downloadable .pdf file.

    God does not throw dice… but her god does allow her to play in a half-pipe.

    • That’s good news. I have been interested in potential solar amplification mechanisms. I thought that one would need a resonance from the ocean to amplify a small periodic driver, but I realise that is not necessary, as ‘pumping’ could build up the amplitude of a small forcing, in the swings between extremes of disequilibrium.

      • BTW I’ve produced a revised spreadsheet model of the so-called global heat budget along the lines of what I was discussing with Jeff Glassman earlier, making sure, as he had suggested, that it ‘fits’ as closely as possible all 6 of the budgets listed or referenced in TF&K09 (i.e. ISCCP-FD, NRA, TF&K09, JRA and the 2 Loeb et al. versions – K&T97 is in there too).

        However all that exercise shows, as Jeff noted, is that the precision with which all these various groups know the key fluxes in the system is obviously far, far less than they fondly imagine.

        FYI in respect of ‘pumping’ it appears from my little model that the minimum ratio OLR/ASR is about 0.90 at the cloud-free (but no ice/snow increase) minimum albedo A ~ 0.068 and maxes out at about 1.10 at the total cloud cover albedo A ~ 0.414. Global Tsurf (min) seems about 8.5 C and Tsurf(max) about 17.6 C.

        Presumably this puts a maximum sensitivity on CO2 of about 2.6 C but would we be sweltering under a total cloud cover? g(max) appears to be about 0.445.

      • Steve, so your model indicates a possible disequilibrium of ±10%?
        A simple AR(1) fit to the temperature data gives T(t+1) = 0.9*T(t) + e.
        I wonder if there is a connection?

        Your max/min range is about 9 degrees (am I right that this comes from the ±10% extremes?).
        This is about the glacial-interglacial range. Hmmm.
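
        For what it’s worth, a one-line sketch of that AR(1) fit (again assuming the `anoms` annual anomaly series used in the earlier examples):

        ```python
        # Minimal sketch of the AR(1) fit mentioned above (lag-1 coefficient near 0.9),
        # assuming `anoms` is the annual anomaly Series used in the other examples.
        from statsmodels.tsa.ar_model import AutoReg

        ar1 = AutoReg(anoms, lags=1, trend="c").fit()
        print(ar1.params)   # the lag-1 coefficient is the value quoted as ~0.9
        ```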

    • I’ve zipped up Jeffrey Glassman’s slightly revised monograph for those who would like a very quick download, here:

      https://download.yousendit.com/THE3a3ZIcHZvQnRFQlE9PQ

      It will be available for download for 14 days. Jeff’s monograph was posted to the CrossFit Journal site because his son Greg Glassman is, I understand, a famous fitness coach in the US. Personally, I found the download of the pdf from there a bit problematic (with Firefox) but it downloaded OK with IE.

  28. Glikson appears to go around challenging people to debates; he regularly throws challenges to The Climate Sceptics; I can’t believe he is still flogging the Sherwood and Allen “windshear” paper, one of the all time worst. Jo Nova is certainly impressive.
