Renewable Energy

Which is better for the environment: renewable energies, oil, gas, coal or nuclear energy? The environmental damage caused by energy sources can be measured by their ‘footprint’ — the area required to produce a specific amount of energy.

An article in Forbes lists the energy produced per unit area of major energy sources, from which I have calculated the area required to produce a specific amount of energy.

Source         Power density (W/m2)   Footprint (m2/W)
Biofuels              0.05                 20
Wind power            1.2                   0.8
Solar PV              6.7                   0.15
Natural Gas          27                     0.04
Oil                  28                     0.04
Nuclear              56                     0.02
LENR                100                     0.01

Table 1. Relative environmental damage from power sources in square meters destroyed per Watt of energy produced (m2/W).

Simple math shows that a gas or oil well has a power density at least 22 times that of a wind turbine, and so uses one twenty-second of the area for the same power generation. If damage to the environment were the only concern, oil and gas would be 22 times more friendly.
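As a quick sanity check, these ratios follow directly from the W/m2 column of Table 1. A minimal sketch in R, using the figures above:

# Power densities from Table 1 (W/m2)
density <- c(Biofuels = 0.05, Wind = 1.2, SolarPV = 6.7,
  Gas = 27, Oil = 28, Nuclear = 56, LENR = 100)
round(1/density, 2)                     # footprint in m2/W, as tabulated
density["Gas"]/density["Wind"]          # 22.5: gas vs wind power density
density["Nuclear"]/density["Biofuels"]  # 1120: nuclear vs biofuels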

The big environmental saviors are nuclear power, which has over 1000 times the power density of biofuels and so is 1000 times more environmentally friendly, and potentially LENR (low energy nuclear reactions), which could pack even more power into a smaller area due to low shielding requirements.

It could be argued that distributed power sources are more efficient because they are located closer to the point of use. In fact, this is not the case: a source with low power density requires more resources for transmission lines and storage, reducing its economic viability and potential to scale.

It is obvious from this basic analysis, and has been shown from experience, that renewable energies such as biofuels, wind and solar are bad for native fauna and flora.

The inevitable conclusion is that advocates of renewable energy do not care about the environment.

The progress of civilization is characterized by the utilization of ever denser energy sources, and the environment has benefited through the reservation of larger areas of land in their natural state. The inclination toward dispersed energy sources is a form of neo-Luddism: opposition to modern technology.

Shaviv and Pielke on Climate Science in 2011

Nir Shaviv is an astrophysicist who wrote some of the more interesting studies showing the role of the cosmic ray flux (CRF) in climate change, now belatedly being acknowledged by the climate establishment.

He gives some advice to students here: Stay away from Climate Science until you are tenured or retired!

“My point is that because climate science is so dogmatic students do risk burning themselves because of the politics, if they don’t follow the party line. Since doing bad (“alarmist”) climate science is not an option either, I advise them to do things which are not directly related to global warming. (In fact, all but one of the graduate students I had, work or worked on pure astrophysical projects). I, on the other hand, have the luxury of tenure, so I can shout the truth as loud as I want without really being hurt.”

The dogmatism is illustrated by Roger Pielke Jr.’s revelation of how GRL has given up all pretense of due process in its review of a manuscript on tropical cyclone frequency and intensity, which addressed misrepresentations of increased damages due to climate change.

Cyclones were among the misrepresentations made by Chief Scientist Prof. Chubb before the Senate Inquiry.

UPDATE: ACM provided a transcript from Hansard.

The Cat asks How Credible is the Chief Scientist?, and Judith Sloan suggests it is a long time since he worked as one.

I can’t resist this quote, which is directly contradicted by Pielke and other evidence.

Mr HUSIC: What would those weather events be?
Prof. Chubb: The argument at the moment is that there will be, for example, much more intense cyclones and whatever they are called in the Northern Hemisphere, and more intense rain and flooding. There will be a lot more intense and focused events of that type and that character as the climate changes. That is where the current view is.

Sea level rise projections bias

Sea levels, recently updated with 10 new data points, reinforce the hiatus described as a ‘pothole’ by Josh Willis of NASA’s Jet Propulsion Laboratory, Pasadena, Calif., who says you can blame the pothole on the cycle of El Niño and La Niña in the Pacific:

This temporary transfer of large volumes of water from the oceans to the land surfaces also helps explain the large drop in global mean sea level. But they also expect the global mean sea level to begin climbing again.

Attributing the ‘pothole’ to a La Niña and the transfer of water from the ocean to land in Australia and the Amazon seems dubious, given that many land areas experienced reduced rainfall at the same time, as shown above.

A quadratic model of sea level indicates that deceleration is now well established and highly significant. If present conditions continue, sea level will peak between 2020 and 2050, at between 10 mm and 40 mm above present levels, and may have stopped rising already.

Attributing the ‘pothole’ in the long-term trend to a short-term La Niña, while ignoring the statistically significant overall deceleration, is another example of bias in climate science.

Call:
lm(formula = y ~ x + I(x^2))

Residuals:
     Min       1Q   Median       3Q      Max
-8.53309 -2.39304  0.03078  2.45396  9.17058

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -2.264e+05  3.517e+04  -6.438 7.40e-10 ***
x            2.230e+02  3.513e+01   6.348 1.21e-09 ***
I(x^2)      -5.490e-02  8.772e-03  -6.258 1.98e-09 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 3.448 on 222 degrees of freedom
Multiple R-squared: 0.9617,  Adjusted R-squared: 0.9613
F-statistic:  2786 on 2 and 222 DF,  p-value: < 2.2e-16

And the code:

figure5 <- function() {
  # Quadratic fit to the sea level series SL (assumed to be a ts object)
  x <- time(SL); y <- SL
  l <- lm(y ~ x + I(x^2))
  # Project the fit to 2050 with a confidence interval
  new <- data.frame(x = 1993:2050)
  pred.w.clim <- predict(l, new, interval = "confidence")
  matplot(new$x, pred.w.clim, lty = c(1, 2, 2), type = "l",
    ylab = "Sea Level", main = "Quadratic Projection of Sea Level Rise",
    ylim = c(-10, 100), lwd = 3, col = c(2, 2, 2), xlab = "Year")
  lines(SL)  # overlay the observed series
}
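The peak year quoted above can be read directly off the fitted coefficients: a quadratic a + b*x + c*x^2 with c < 0 peaks at x = -b/(2c). A quick check in R using the printed estimates (the peak level itself is too sensitive to the rounding of the coefficients to recover this way):

# Vertex of the fitted quadratic: the derivative b1 + 2*b2*x is zero at the peak
b1 <- 2.230e+02    # estimate for x
b2 <- -5.490e-02   # estimate for I(x^2)
-b1/(2*b2)         # about 2031, inside the 2020-2050 window given above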

Is Bill Gates Rossi's Customer?

Rossi revealed on his blog site that The Customer is a person and not a corporation.

Andrea Rossi
September 24th, 2011 at 10:46 AM
Dear Simon Knight:
By half October we will explain exactly what follows:
1- where the 1 MW plant will be tested
2- all the (not confidential) characteristics of the 1MW plant (the complementary part is more reactors, of a new type that in the meantime we have developed)
3- possibly, who is the Customer, if the Customer will allow us to communicate his name.
The measurements will be made, as I said, by world top class scientists.
I can confirm that we are well in schedule, therefore all will happen in October.
I also confirm that in November we will start our commercial strategy.
Warm Regards,
A.R.

Which made me wonder: could the person be Bill Gates? He previously wrote an article at Huff Post titled Why We Need Innovation, Not Just Insulation, stating what every rational person knows about current strategies of renewables and efficiency improvements:

No amount of insulation will get us there, only innovating our way to essentially 0-carbon energy technology will do it. If we focus on just efficiency to the exclusion of innovation, or imagine that we can worry about efficiency first and worry about energy innovation later, we won’t get there.

He then drops a hint about projects.

To achieve the kinds of innovations that will be required I think a distributed system of R&D with economic rewards for innovators and strong government encouragement is the key. There just isn’t enough work going on today to get us to where we need to go.

Under the Labor-Green Carbon Tax legislation the renewable energy sector will receive over $13 billion in funding, including $10 billion for the establishment of the Clean Energy Finance Corporation (CEFC), which will invest in renewable energy and energy efficiency projects and technologies, and $3.2 billion for the establishment of the Australian Renewable Energy Agency, for research and development into renewable technologies.

But the dim bulbs running the country have determined that nuclear technologies, which include Rossi’s, will be excluded from the fund.

Not that it will matter if Rossi’s demo in October works, and someone like Bill Gates is The Customer. There will be an e-Cat on every desktop within 10 years.

Clean Energy Solution on Target for October 2011 Test

Here is Rossi’s 1 MW plant, consisting of 52 individual E-Cats mounted in a shipping container, as reported by NyTeknik.

Above is a video tour of the 1 MW plant. A successful trial in October will prove beyond doubt a clean nuclear energy at a tenth of the cost of fossil fuels or less, without emissions of greenhouse gases.

It will demonstrate, once again, the folly of governments trying to pick winners, such as the billions of dollars directed at renewable energy that will never deliver on its promise.

Rossi developed the technology entirely using his own money.

Compare this with the billions of dollars in renewable research and development poured down the drain by the Australian Government on behalf of the Australian taxpayer. Most scientists in the field of Low Energy Nuclear Reactions (LENR) have long since shut up shop for lack of a small fraction of that amount of money.

If successful, it will again show that, as with anthropogenic global warming, modern science is motivated by the quest for government grants, discourages new ideas and new theoretical models, and is ruled by powerful special interests.

See this recent letter from the DoE on cold fusion research:

Monday, September 19, 2011

Dear Mr. Owens:

This is in response to your e-mail message to Secretary Chu dated September 13, 2011 in which you asked to know where the Department of Energy stands on “cold fusion.”

In 1989, a review panel that had been charged by the Department concluded that reports of the experimental results of excess heat from calorimetric cells did not present convincing evidence that useful sources of energy will result from the phenomena attributed to “cold fusion.” To quote the panel, “Hence, we recommend against the establishment of special programs or research centers to develop cold fusion.”

In 2004, the Department organized a second review of the field and that review reached essentially the same conclusion as the 1989 review. The Department’s Office of Sciences does not provide any funding support for “cold fusion” research.

Al Opdenaker

Cold fusion is still laughed at by the entire scientific establishment. Of course, if the test in October turns out to be a flop, the joke will be on me. I think I can handle that. If not, then the entire scientific establishment, from the Royal Academies down, needs a close look to see whether it provides anything of value.

How ironic if the free market delivers a viable, cheap, clean energy solution at no cost to humanity, when the social democrats, whose raison d’être is to save us from the market, as the Prime Minister revealed, have with billions of taxpayer dollars done nothing, nada, zilch.

See background article here.

Phase Shift in Spencer's Data

It was shown here that the phase shift between total solar irradiance and global temperature is exactly one quarter of the solar cycle: 90 degrees, or 2.75 years. This is a prediction of the accumulation theory, described here and here, which shows how solar variation can account for paleo and recent temperature change.

Phase shifts on the short-wave (SW) side of the climate system are erroneously attributed to ‘thermal inertia’ of the ocean and earth mass and called ‘lags’, or regarded as non-existent. If thermal inertia were responsible, then a larger mass would show a larger lag. In fact, an exact 90 degree shift emerges directly from the basic energy balance model, C dT/dt = F, as I will show later.
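In brief, anticipating that derivation: for a sinusoidal forcing, the basic energy balance model integrates to

C dT/dt = F0 cos(wt)
T(t) = (F0/(C*w)) sin(wt) = (F0/(C*w)) cos(wt - pi/2)

so the response lags the forcing by exactly a quarter cycle (90 degrees) for any heat capacity C. A larger mass scales down the amplitude of the response, not the lag.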

A 90 degree shift is also present on the long-wave (LW) side at the annual time-scale in Spencer’s dataset. This cannot be a coincidence, and it gives an important insight into the dynamics of the climate system.

The first step is understanding how these shifts arise.

The figure above shows an impulse (black) based on a cosine function with a 2*pi period, with its scaled derivative (green) and integral (red). Time is on the x axis.

The impulse in black represents any sudden change in forcing in the atmosphere that ‘causes’ the derivative and integral responses (as they are derived directly from the impulse).

Note three things: (1) the peak of the derivative leads the peak of the impulse, while the peak of the integral lags it. (2) The lead and lag are exactly one quarter of the period (2*pi/4, or 1.57 radians) of the cosine impulse. (3) The integral ‘amplifies’ the impulse; this is the mechanism responsible for high solar sensitivity in the accumulation theory.

Cross-correlation (ccf in R) of two variables gives precise information about phase shifts: their size and their significance. Above is the cross-correlation of the derivative and the integral with the impulse, with significance bounds shown as blue lines. The phase shift can be read off from the location of the first peak.

The data from Spencer consist of satellite measurements of short-wave and long-wave intensities at the top of the atmosphere, for both clear and cloudy skies. Below is the cross-correlation of each of these variables against his global temperature (HadCRUT3) column.

The peaks of correlation show a three month phase shift on the LW and SW_clr components, three months being one quarter of the annual cycle. The LW peaks are positive and the SW peaks are negative, due to the orientation of the fluxes in the dataset.

The LW peaks (LW_tot and LW_cls) are affected by the sharp peak at zero lag, probably due to fast radiant effects (magenta line, SW_clr), as shown in a similar graphic of these data by P.Solar here, mentioned in this thread at CA.

The LW and SW_clr components lead the global surface temperature. There are three possible explanations:

1. Changes in cloud cover actually do drive changes in global temperature, due to the cosmic ray flux (CRF) or other effects, or

2. The changes in cloud cover are caused by changes in global temperature, via the derivative mechanism described above, or

3. Both 1 and 2.

Spencer argues that it is impossible to distinguish between 1 and 2. Both Spencer and Lindzen consider the lags important, because correlation is greatly improved (and determines whether feedback is positive or negative). Neither seems to have mentioned the 3 month phase relationships emerging from integral/derivative system dynamics.

I can’t see how it is possible to perform a valid analysis without this insight.

Here is the code.

figure0 <- function() {
  # Cosine impulse over one period, padded with zeros on either side
  x  <- 2*pi*seq(-1, 1, by = 0.01)
  x2 <- 2*pi*seq(-0.5, 0.5, by = 0.01)
  x1 <- c(rep(0, 50), cos(x2), rep(0, 50))
  png("impulse.png")
  # Scaled derivative and integral of the impulse
  dx <- as.numeric(scale(c(0, diff(x1))))
  sx <- as.numeric(scale(cumsum(x1)))
  plot(x, x1, ylab = "Magnitude", ylim = c(-2, 2), lwd = 5, xlab = "Radians",
    main = "Derivative and Integral of an Impulse", type = "l")
  # Vertical guides at -pi/2, 0 and pi/2 (the quarter-period lead and lag)
  lines(c(-2*pi/4, -2*pi/4), c(-2, 2), col = "gray", lty = 2)
  lines(c(0, 0), c(-2, 2), col = "gray", lty = 2)
  lines(c(2*pi/4, 2*pi/4), c(-2, 2), col = "gray", lty = 2)
  lines(x, sx, col = 2, lwd = 3)  # integral (red)
  lines(x, dx, col = 3, lwd = 3)  # derivative (green)
  text(c(-2*pi/4, 0), c(1.5, 1.5), c("f'(t)", "f(t)=cos(t)"))
  text(2*pi/4, 1.5, expression(paste("\u222B", f(t))))  # integral sign
  dev.off()
  png("cross.png")
  # Cross-correlations of the derivative and integral with the impulse
  cxd <- ccf(dx, x1, lag.max = 100, plot = FALSE)
  cxs <- ccf(sx, x1, lag.max = 100, plot = FALSE)
  w <- 2*pi*cxd$lag/100  # convert lag in samples to radians (one period = ~100 samples)
  plot(w, cxs$acf, col = 2, type = "h", xlab = "Radians", ylab = "Correlation")
  lines(w, cxd$acf, col = 3, type = "h")
  # Approximate significance bounds
  lines(c(-100, 100), c(0.15, 0.15), lty = 2, col = 4)
  lines(c(-100, 100), c(-0.15, -0.15), lty = 2, col = 4)
  lines(c(-100, 100), c(0, 0))
  dev.off()
}

figure3 <- function() {
  # Cross-correlate Spencer's global temperature (column 7) with the six flux columns
  par(mfcol = c(1, 1), mar = c(4, 4, 3, 3))
  figure3.1(spencer[, 7], spencer[, 1:6], xlim = 1)
  # Alternate dataset, retained from the original:
  #par(mar=c(4,4,0,3))
  #figure3.1(dess[,5],dess[,1:4],xlim=1)
}

figure3.1 <- function(X, data, lag = 10, xlim = 10) {
  png("cross_spencer.png")  # renamed: the original reused "impulse.png", overwriting figure0's plot
  # Axes, with approximate significance bounds (dashed blue)
  plot(c(-100, 100), c(0, 0), xlim = c(-xlim, xlim), ylim = c(-0.5, 0.5),
    type = "l", xlab = "Years", ylab = "Correlation",
    main = "Cross-correlation of SW and LW with Global Temperature")
  lines(c(-100, 100), c(0.18, 0.18), lty = 2, col = 4)
  lines(c(-100, 100), c(-0.18, -0.18), lty = 2, col = 4)
  # Dotted guides at +/- 3 months (0.25 years)
  lines(c(0.25, 0.25), c(-1, 1), lty = 3)
  lines(c(-0.25, -0.25), c(-1, 1), lty = 3)
  send <- tsp(data)
  labels <- colnames(data)
  # Align the temperature series with the span of the flux data
  t <- window(X, start = send[1], end = send[2])
  for (i in 1:dim(data)[2]) {
    cxd <- ccf(data[, i], t, lag.max = lag, plot = FALSE)
    w <- cxd$lag
    lines(w, cxd$acf, col = i + 1, lwd = 2)
    text(0.9, cxd$acf[length(w)], labels[i], col = 1, cex = 0.5)  # label at right edge
  }
  dev.off()
}

FFT of TSI and Global Temperature

This is an application of Bart’s work-in-progress Fast Fourier Transform algorithm, coded in R, to total solar irradiance (TSI, via Lean 2000) and global temperature (HadCRU). The results show (PDF) that the atmosphere is sufficiently sensitive to variations in solar insolation for these to cause recent (post-1950) warming and paleowarming.

The mechanism, suggested by the basic energy balance model but confirmed by the plots below, is accumulation. That is, global temperature is not only a function of the magnitude of the solar anomaly, but also of its duration. Small but persistent insolation above the solar constant can change global temperature over extended periods. Changes in temperature are proportional to the integral of the insolation anomaly, not to insolation itself.
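A minimal sketch of this accumulation mechanism in R, using an illustrative sensitivity and a synthetic anomaly rather than the actual Lean (2000) and HadCRU series:

# Accumulation: temperature response proportional to the running integral of
# the insolation anomaly, T_t = lambda * sum(TSI_i - TSI0), not to the
# instantaneous insolation. lambda and the anomaly are illustrative only.
lambda <- 0.05
years <- 1850:2000
anom <- rep(0.1, length(years))  # small, persistent anomaly (W/m2)
T_resp <- lambda * cumsum(anom)  # response grows with the duration of the anomaly
plot(years, T_resp, type = "l", xlab = "Year", ylab = "Temperature response",
  main = "Accumulation of a Small Persistent Anomaly")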

The figure below is the smoothed impulse response resulting from the Fourier analysis of TSI and GT. This is the simulated result of a single spike increase in insolation. The result is a constant change, or step, in GT. This is indicative of a system that ‘remembers’ shocks, such as a ‘random walk’. Because of this memory, changes in TSI are accumulated. (Not sure why it’s negative.)
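That step response is exactly what an accumulator produces, as a two-line illustration with a hypothetical spike series shows:

spike <- c(rep(0, 10), 1, rep(0, 10))  # a single spike in forcing
cumsum(spike)  # 0 ... 0 1 1 ... 1: a permanent step; the system 'remembers' the shock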

Below is the Bode plot of the TSI and GT data (still working on this). The magnitude response shows a straight negative trend, indicative of an accumulation amplifier: an integrator’s gain falls in inverse proportion to frequency, giving a constant negative slope. This is also consistent with the spectral plots of temperature covering paleo timescales in Figure 3 here.

Bart’s analysis is going to be very useful for doing this sort of dynamic systems analysis in a very general way. Up to now I have been using spectral plots and ARMA models.

The analysis above is an indication of the robustness of the method, as it gives a different but appropriate result on a different data set. It’s going to be a very useful tool in arguing that the climate system is not at all like it’s made out to be.

I will post the code when it’s further along.