Thursday, December 31, 2009

In this long-overdue post, I'll address Setterfield's paper, which he also labels “Response to Tom Bridgman, part II”, available here.
As we'll see, this paper is a very weak response to my complaints. In the process, Setterfield provides even more examples of why his claims qualify as pseudoscience.
In Section II, under 'The Zero Point Energy and the Redshift', Setterfield again raises the issue of William Tifft's claims of quantized redshifts. I've covered some of the issues with redshift quantization when discussing John Hartnett's use of this claim (see John Hartnett's Cosmos 1, 2). More concrete examples of the problems are planned for a future post.
One of the key ingredients of reproducible science is that other researchers must be able to use the mathematical techniques a work defines to test and expand on the research. I'll illustrate with the example Setterfield develops in the sections 'The Zero Point Energy and atomic constants' and 'The Atomic age of our galaxy'.
First, take a look at an abbreviated version of equation 9:
c ~ (1+z) = (1+T)/sqrt(1-T^2).
The variable T is Setterfield's scaled time, measured back from the present: T=0 corresponds to today, while T=1 corresponds to the creation of the universe in Setterfield's model. Note that by this equation, the redshift value, z, is zero today and infinite at the time of the creation of the universe. Okay so far. This is the one item from 'Setterfield G' that Setterfield actually fixed, eliminating the problems created by the trailing '-1' in this equation.
But this doesn't solve the problems I raise later in Setterfield G.
Setterfield mentions that the '~' symbol means 'proportional to'. To be more definite, it means there is a simple constant that relates the two quantities. We'll use the symbol K for this constant, the same symbol Setterfield uses later in equation 13. So equation 9 becomes
c = K(1+z) = K(1+T)/sqrt(1-T^2)
We can choose convenient units. For example, if we let our unit of time be one year and our unit of distance be one lightyear, then the speed of light today is (1 lightyear/1 year) = 1. This has the net effect of casting our speed-of-light measurements into units relative to the speed of light today.
What is the value of K for these units? That's easy to determine from the data Setterfield has provided. For today, using the units defined above, c=1 and z=0 so equation 9 becomes
1 = K(1+0)
which is only satisfied if K=1. Therefore, equation 9 can be written
c = 1+z = (1+T)/sqrt(1-T^2)
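To make this concrete, here is a minimal Python sketch (my own, not Setterfield's code; the function names are mine) that simply tabulates equation 9 in these units, with K=1:

    import math

    def c_of_T(T):
        """Equation 9 with K = 1 (units of lightyears per year): c = (1+T)/sqrt(1-T^2)."""
        return (1.0 + T) / math.sqrt(1.0 - T**2)

    def z_of_T(T):
        """Redshift from equation 9: 1 + z = c, so z = c - 1."""
        return c_of_T(T) - 1.0

    for T in (0.0, 0.5, 0.9, 0.99):
        print("T=%4.2f  c=%8.3f  z=%8.3f" % (T, c_of_T(T), z_of_T(T)))
    # T=0 gives c=1 and z=0 (today); both blow up as T -> 1 (creation), as described above.

As expected, the relation is anchored at c=1 and z=0 today.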
Next, we examine Setterfield's equation 10, the lookback time, which is just the distance the light has traveled if emitted at the scaled time T and received today. He states that this is the integral of equation 9, but he doesn't tell the complete story. He writes the equation as
t = K*[arcsin(T) - sqrt(1-T^2) -1]
where K* is yet another constant. He solves for K* by requiring that the total lookback time to the creation of the galaxy correspond to an apparent age of about 12.3 billion years. With this constraint, he obtains
K* = 4.7846e9 lightyears or years
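As a sanity check on that value, here is a short sketch (again my own; I assume the lookback time is the integral of equation 9 from 0 to T, with the constant of integration chosen so that t=0 at T=0, which makes the constant term +1 rather than the -1 in the form quoted above):

    import math

    def lookback_scaled(T):
        # Definite integral of (1+T')/sqrt(1-T'^2) from 0 to T:
        # the antiderivative is arcsin(T) - sqrt(1-T^2), and the lower limit contributes +1.
        return math.asin(T) - math.sqrt(1.0 - T**2) + 1.0

    total = lookback_scaled(1.0)     # pi/2 + 1, about 2.5708, at T = 1 (creation)
    K_star = 12.3e9 / total          # scale so the total lookback time is 12.3 billion years
    print(K_star)                    # ~4.7845e9, essentially Setterfield's 4.7846e9

So the 12.3-billion-year constraint does reproduce his K*, even though the intermediate steps are not spelled out.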
Using some additional manipulations, which are unclear, he determines the value of K in equation 12
K = 1.780e6
Now look at Setterfield's use of K in equation 13
c* = K(1+z)
which reveals that what Setterfield is calling c* is what we call 'c' above.
But now, according to Setterfield, K is not equal to one! To get another clue, let's use Setterfield's equation 13 to determine the speed of light today, when z=0. We get:
c* = 1.780e6(1+0) = 1.780e6 = 1,780,000.0
or, according to Setterfield, the speed of light today is nearly 2 million times faster than the speed of light today???!!! This is an internal contradiction in Setterfield's theory. It generates nonsensical values in a location where we have reliable measurements!
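The internal inconsistency is plain if you put the two uses of 'K' side by side (a trivial sketch, my notation):

    z_today = 0.0
    c_from_eq9  = 1.0     * (1.0 + z_today)   # K = 1, fixed by c = 1 lightyear/year at z = 0
    c_from_eq13 = 1.780e6 * (1.0 + z_today)   # Setterfield's K from equation 12
    print(c_from_eq9, c_from_eq13)            # 1.0 versus 1780000.0 for the same quantity today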
What gives??
The fatal error, which Setterfield has been repeatedly evading, is that when equation 9 is integrated to generate equation 10, the constant 'K' must be equal to 'K*'. Setterfield never acknowledges this fact and here demonstrates a significant effort to hide it.
This article is clearly not a response to me, as Setterfield does not address any of the issues I raised in “Setterfield G”; he merely tries to hide them behind another layer of obfuscation. This 'sleight of math' maneuver is highly suspicious because it suggests Setterfield knows his claims are in deep trouble and is trying to hide the error. The alternative is that Setterfield really has no clue what he is doing. Either way, with the lousy math skills he's demonstrated, I would not want Mr. Setterfield doing my tax accounting!
Here is another example of how, as with many other pseudosciences, any mathematical model offered at all is useless for research by anyone else. That alone makes it impossible for the claims to become 'accepted science'. And in the process of 'solving' problems far away, where we have poor measurements, the flawed model generates nonsensical results close to home, where we have good measurements!
I'll save comments on the "Origin of the Elements" section for a future post.
Friday, December 18, 2009
Setterfield & c-Decay: "Reviewing a Plasma Universe with Zero Point Energy"
Okay, I'm finally getting back to some young-universe creationist issues. I had been working on these months ago when I was diverted by Don Scott's presentation at GSFC.
Here I'll do a further examination of Barry Setterfield's “Reviewing a Plasma Universe with Zero Point Energy”. My occasional co-author Jerry Jellison has already written some material related to this monograph (see Critique of Some New Setterfield Material). I had written earlier criticisms of Setterfield's math (“Setterfield G”) and data selections (“Barry Setterfield joins the Electric Cosmos”, “Setterfield Again...”). Since I've spent much of the past two years examining Electric Universe (EU) claims, I'll be leveraging much of that background here as I explore Setterfield's attempt to integrate his c-decay into an EU/plasma cosmology (PC) framework.
Section I of Setterfield's work is largely a primer on plasmas in space. One interesting thing is the heavy reliance on sources at the Thunderbolts site, which is largely an assembly of EU supporters. Other large sections of this history seem to be based on material that is also available from Wikipedia (compare to "Wikipedia: Birkeland Current").
Setterfield tries to invalidate mainstream astronomy through many of the EU claims of discoveries of Birkeland currents in space. Yet all of the observations EU supporters (and Setterfield) document as evidence that electric fields can be a significant driver are consistent with mechanisms known back in the 1920s, such as the rotating magnetic dipole of the geomagnetic field and ionospheric mechanisms forming double layers through gravity gradients, the Pannekoek-Rosseland field (see “The REAL Electric Universe”). EU supporters claim these discoveries are evidence of their more outrageous claims, such as the Sun powered by external electric currents or galaxies formed by interacting Birkeland currents, much the same way young-universe creationists try to leverage archeological discoveries of some city mentioned in the Bible as evidence for their interpretations of Genesis.
Now for an examination of some errors which reveal the poor quality of Setterfield's scholarship:
page 3: Setterfield claims that the critical ionization velocity is the same as the Alfven velocity. “This “critical ionization velocity” was predicted to be in the range of 5 to 50 kilometers per second. In 1961 this prediction was verified in a plasma laboratory, and this cloud velocity is now often called the Alfvén velocity.”
Incorrect. The Alfvén velocity is the speed of an Alfvén wave (Wikipedia: Alfven Wave).
page 7: on the formation of double layers, Setterfield claims “In general, the oppositely charged DL are usually maintained their electric potential difference is balanced by a compensating pressure which may have a variety of origins.”
In the configuration he describes in the preceding paragraph, the 'compensating pressure' is a gravity gradient, the Pannekoek-Rosseland field mentioned above. In this configuration, the number of high-energy particles generated by the field is small compared to the large number of particles required to establish and maintain the field.
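For reference (my addition, not something in Setterfield's text), the textbook estimate of the Pannekoek-Rosseland field in an isothermal, fully ionized hydrogen atmosphere is

eE = (1/2)(m_p - m_e) g ≈ (1/2) m_p g

that is, the field supports roughly half the weight of a proton. It is an extremely weak field, consistent with the point above that such a configuration accelerates comparatively few particles to high energies.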
page 7: quoting Peratt, “the metal-to-hydrogen ratio should be maximum near the center and decrease outwardly”.
Since the current streams of Peratt's galaxies give the system axial symmetry, this direction would be radially outward in the galactic disk. This is inconsistent with observations of galaxies, since Population II stars, with low metallicity, occupy the bulge (center) and halo of galaxies, while Population I stars, with high metallicity, occupy the disk (and spiral arms). In reality, the abundance gradient shows high metallicity in the galactic disk, with decreasing metallicity above and below the disk (see Wikipedia: metallicity).
pages 18-19: The problems here are covered in my post “Setterfield G”.
page 23: “Closely linked with the formation of compressed plasma cores of galaxies are the oldest group of stellar objects, the Population II stars.”
This is wrong for the same reasons as Setterfield's claims on page 7 are wrong.
page 22: “quasars from the earliest epochs with redshifts around z=6.5 or greater, show the same iron abundance as pertains at present.”
Setterfield's own references do not support his interpretation. While the Thompson et al. reference discusses the observations, the Kashlinsky reference suggests detection of emission of residual infrared radiation from Population III stars (formed from the initial hydrogen and helium of the Big Bang) that would produce the needed metals in limited regions. Models suggest that the lack of elements with Z>2 makes Population III stars much more massive than Population I or II stars.
page 22: “a 50 million degree ignition temperature is easily achieve with a mere 4308.7 eV with no restriction on which elements may be formed.”
Setterfield suggests this claim comes from pp 105-107 of Don Scott's “The Electric Sky”, but I can't find it.
If you compute the Coulomb barrier for two protons you get about 125 keV, far higher than the 4.3 keV kinetic energy Setterfield specifies above. If you start with the simplest, most stable form of matter, protons and electrons (balanced for charge neutrality), the only way to get element formation started at these energies is quantum tunneling through the barrier, since classical physics won't bring the protons close enough. However, we already know this process comes into play at the 17 million degrees corresponding to the temperature at the center of one solar mass of hydrogen and helium, such as the Sun. Setterfield's element formation mechanism becomes redundant.
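Here is a back-of-the-envelope version of that estimate (my own sketch; the barrier height depends on how close you assume the protons must approach, which I take to be roughly 10 femtometers here):

    import math

    k_e = 8.98755e9        # Coulomb constant, N m^2 / C^2
    e   = 1.602177e-19     # elementary charge, C
    k_B = 1.380649e-23     # Boltzmann constant, J/K

    r = 10e-15             # assumed closest approach, ~10 fm (an assumption; smaller r means a higher barrier)
    U = k_e * e**2 / r     # Coulomb potential energy of two protons at separation r, in joules
    U_keV = U / e / 1.0e3  # convert to keV
    T = 2.0 * U / (3.0 * k_B)   # temperature if U were the mean thermal energy, U = (3/2) k_B T

    print("barrier ~ %.0f keV, equivalent temperature ~ %.1e K" % (U_keV, T))
    # With r ~ 10 fm this gives ~144 keV (compare the ~125 keV figure above) and ~1.1e9 K,
    # both far beyond Setterfield's 4.3 keV / 50 million degree claim.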
He could avoid the Coulomb barrier penetration by starting with neutrons, but then he has only minutes for them to build nuclei before they decay. Even worse, Setterfield has this mechanism operating in the early universe on his extremely accelerated atomic time scale, while the collision rate between the particles operates on his slower dynamical time scale.
Without a more definitive reference with details to examine, this piece of information appears to be more the product of wishful thinking than physics.
References
- K. L. Thompson, G. J. Hill, and R. Elston. Lack of Iron Abundance Evolution in High-Redshift QSOs. Astrophysical Journal, 515:487–496, April 1999. doi: 10.1086/307049.
- A. Kashlinsky, R. G. Arendt, J. Mather, and S. H. Moseley. Tracing the first stars with fluctuations of the cosmic infrared background. Nature, 438:45–50, November 2005. doi: 10.1038/nature04143.

Exercise for the reader:
* Estimate the height of the Coulomb barrier between two protons. If this were the mean thermal energy of an electron-proton gas, what is its temperature?
Coming up: a look at “Data and Creation: The ZPE-Plasma Model“, AKA “Response to Tom Bridgman, part 2”
Update: January 28, 2014: Fixed broken links.
Sunday, December 13, 2009
A Paper Illustrating More of Crothers' Relativity Errors
Dr. Jason Sharples has published a paper in 'Progress in Physics', “Coordinate Transformations and Metric Extension: a Rebuttal to the Relativistic Claims of Stephen J. Crothers” which points out some of the many strange errors that Stephen J. Crothers makes in his somewhat bizarre interpretation of relativity. I've written some on this topic already (See "Some Preliminary Comments on Crothers' Relativity Claims").
Dr. Sharples exposes Crothers' misstatements in a very pedagogical way, choosing simpler examples, such as 2-dimensional geometry, and applying Crothers' analysis methods. This technique illustrates that Crothers' claims of 'fatal problems for general relativity' are actually problems in Mr. Crothers' interpretation of general relativity.
For example, Mr. Crothers likes to claim that General Relativity has an internal contradiction because the metric radius in a Hilbert form of the Schwarzschild metric is not equal to the Gaussian curvature (Wikipedia: Gaussian Curvature) of the metric. Dr. Sharples uses the simple example of a spherical line element in a Euclidean (flat) 3-dimensional space to illustrate that these quantities are not equal even in this simplified case and are not required to be equal.
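To see the flavor of the argument, here is a minimal illustration in the same spirit (my own, not necessarily Sharples' exact construction). Write flat Euclidean 3-space using a shifted radial coordinate $\rho = r - a$:

$$ ds^2 = d\rho^2 + (\rho + a)^2 (d\theta^2 + \sin^2\theta \, d\phi^2). $$

The 2-surfaces of constant $\rho$ are ordinary spheres with areal ('metric') radius $\rho + a$ and Gaussian curvature $1/(\rho + a)^2$, not $1/\rho^2$. Nothing is pathological about the space; the radial coordinate label is simply not obliged to coincide with the metric radius defined by the geometry.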
From this introductory example, Sharples dives into Crothers' strange arguments about the Schwarzschild solution.
1) One of the more interesting revelations from Sharples' examination is that Crothers' 'solution' for a spherically-symmetric time-independent system in General Relativity is actually just the Schwarzschild metric, truncated to the region outside the event horizon.
2) Crothers presents the variable $\alpha$, in his form of the equation, as an arbitrary free parameter. Crothers never bothers to apply the physical constraint that the metric must generate motions consistent with the Newtonian gravitational solution. Once this constraint is applied, Sharples demonstrates that $\alpha = 2m$, the Schwarzschild radius (see the sketch after this list)!
3) Item (2) becomes even more important when Sharples demonstrates that Crothers' solution is simply the traditional Schwarzschild solution mapped in a different coordinate system. Crothers' infinity of solutions has no physical meaning, just as we can study the Earth in spherical, cylindrical, or cartesian coordinates (whichever is more convenient for the mathematics) with no change in the physical results.
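For the record, the constraint in item (2) is the standard weak-field matching; a sketch of the usual argument (mine, not quoted from Sharples' paper): far from the mass, the time-time metric component must reduce to its Newtonian form,

$$ g_{tt} \simeq -\left(1 - \frac{\alpha}{r}\right) \;\longrightarrow\; -\left(1 + \frac{2\Phi}{c^2}\right), \qquad \Phi = -\frac{GM}{r}, $$

which forces $\alpha = 2GM/c^2$, the Schwarzschild radius (equal to $2m$ in geometrized units with $G = c = 1$).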
I'd like to thank Dr. Sharples for his work on a very clear and understandable paper. My own GR is a bit rusty and I would have had to spend some time reviewing relativity before I could have prepared a response to Crothers of this quality. I am also very pleased that Dr. Sharples hit at the same fundamental areas where I suspected Crothers was off-base, using simpler real-world geometric cases to expose Crothers' misunderstandings as well as applying the real-world constraints in Crothers' Schwarzschild analysis. They were all on my 'to do list' for a possible response.
Crothers' analysis is seriously flawed. I wonder how Crothers would make his 'interpretation' of the spherically-symmetric solution consistent with the physics needed to make a reliable GPS receiver (see “Scott Rebuttal. I. GPS & Relativity”).
As an aside, I also find it interesting that Mr. Crothers has become aligned with the Electric Universe (EU) advocates. Mr. Crothers' understanding of physics seems to rely on some rather bizarre interpretations of mathematics that keep it disconnected from real physical theories. Yet comparison of mathematical models against observations and/or experiments is a key component of valid science. If Crothers chooses to dismiss such validation, he is admitting that he is not doing science.
Meanwhile, the EU supporters distrust mathematical models, judging by the level of excuses I receive when I try to find reproducible details on their Electric Sun models. EU seems to rely on what could only be described as electrophilic pareidolia (Wikipedia: Pareidolia) in observations, assuming any filamentary glowing structure must be an electric arc.
Crothers is all mathematics with no experiment.
The Electric Universe is all experiment with no mathematics.
How these two ended up working together is a mystery in itself!
Mr. Crothers has apparently prepared a rebuttal to Sharples, but it was rejected (!!) by 'Progress in Physics' (Crothers is on the editorial board of this publication). I suspect the rebuttal exceeded the 8-page limit in PiP's new policy. If Mr. Crothers has this response online, I'll be happy to post a link to it.
“Experiment without theory is tinkering. Theory without experiment is numerology.”
Both are needed for a successful science.
Sunday, December 6, 2009
Pseudoscience & 'ClimateGate'
Yet another diversion from creationism issues, but it is still related to pseudo-astronomy and its tactics. The issues are still linked because the underlying physics is the same.
Probably the most complete work I've read on the physics, chemistry, and history of climate change is “The Discovery of Global Warming - A History” by Spencer Weart (American Institute of Physics). But the bottom line on the issue is that the intake and output of every organism alters the chemical composition of its environment, and directly or indirectly, the Earth's climate. These environmental changes can become so extreme that they prove detrimental for the organism itself. Humans are just the most recent organisms in the history of the planet to significantly alter the atmosphere.
This video presents the problem as a risk analysis by a high-school science teacher.
Probably the most disturbing thought is a recent publication suggesting that it may already be too late: Are there basic physical constraints on future anthropogenic emissions of carbon dioxide? by Timothy J. Garrett (21 November 2009). This researcher analyzed the problem from the standpoint of fundamental thermodynamics. I suspect there might be a few parameters he missed, but it suggests tight constraints on the problem.
The latest political stunt in the field of climate change has been dubbed “ClimateGate” by some in the media. You can search for the term to find more of the 'controversy'. (also see Wikipedia: Climatic Research Unit e-mail hacking incident)
Among the fallout of this 'scandal' are demands that all the data and software for analyzing it be made public. Some of these people asking for data need to learn how to use Google.
The fact is, much of this data is already public. NASA has pushed much of the satellite data it has collected into public data archives. Many are freely accessible online, just like much of the astronomical data NASA has collected. Over the past decade, there was a political effort to reduce the availability of that data, which takes some time to correct. Here's a link to some mission-specific data: Climate.
Real Climate is also distributing a growing list of links to data used in climate research. See “RealClimate: Where's the Data”. I suggest Mr. Horner stock up on a few hundred terabyte disk drives if he really wants this data. After all, if you want to uncover a 'scandal', you have to go back to the RAW data. He might want to hire a few (dozen?) programmers with a strong background in numerical methods and scientific data formats, that is, if he really plans to have it re-analyzed.
As for software, here's my short list of the numerous public codes available, mostly oriented towards education.
• pyClimate
• EdGCM
• SourceForge: Climate Model
• Java Climate Model
• NASA/GSFC Open Source Climate Model
Some of these are from my resource list when I used to work with Earth science data. Many of them show up in reasonably intelligent searches on Google.
For those who think the Sun takes the full blame for the warming trends, here's a one-stop resource for most solar data: Virtual Solar Observatory. (I've never understood why claiming the Sun is totally responsible for the current warming trend is regarded as good news by so many. If it were due to the Sun, then there is virtually nothing we can do about it. Consider the impact continued warming would have on water availability to food supplies to eventually the entire economy. If it is the Sun, then as a species, we are so screwed!)
The downside of making too much code openly available is that too many researchers may rely on the exact same algorithm since it is so easily available. Sometimes this is good, but it can also be a problem. Multiple independent researchers generally solve computational problems in different ways. They will argue about the techniques used and compare results (this is evident in some of the 'leaked' emails, which seem to be very conveniently edited - or quote-mined). This diversity in coding actually makes it easier to catch errors, as erroneous code or algorithms will stand out more easily. This checking of the algorithms used by others was also part of the content of the e-mails. The fact that the different models generate such similar trends suggests that, while not perfect, they give a reliable guide. Remember, the codes are just as likely to underestimate the severity of some changes as to overestimate it.
I am not a climate scientist but a bunch of them work “down the hall“ from me. I know a few others, particularly Bob Grumbine of the More Grumbine Science blog, who gives interesting introductory tutorials on climatology and climate data. Bob was (is? My ISP no longer carries USENET) also a regular contributor on Talk.Origins on topics of creationism and Electric Universe claims.
As for some of the comments in the e-mails? Yes, you always discuss what the detractors may throw at you. Any good scientist, or chess player, tries to plan several moves ahead based on possible responses from their competitors. Science is a competitive endeavor.
I've had discussions with colleagues about journals that appear to have had their editorial boards taken over by creationists and other cranks. Those discussions would certainly read similar to the 'leaked' emails. Is that evidence that the creationists and crackpots are correct?
There have been moves for nearly 20 years now to make the scientific process more open. It is slow, but it is progressing. There is a move to standardize scientific publications for more reproducibility. This has only become practical recently, with the availability of cheaper, higher-capacity data storage to save the many stages of revision scientific software goes through. See The Open Science Project.
But science only works when all participants are bound by the same standards and criteria. This is where pseudo-scientists start making excuses for their exemption - claiming anything from “God did it” to escape irreproducibility, or declaring a distrust of mathematical models.
So when are the climate-change deniers going to reveal their models and data? Or are the latest accusations just a ploy to distract the public's attention from the real issues?
Other nations have gone down this road of denying some aspect of science that challenged their belief system (See Wikipedia: Deutsche Physik, Lysenkoism). Eventually the citizens of those nations pay a heavy price for that ignorance. That these e-mails are viewed as a 'scandal' is an indicator of the sad state of science education in the U.S. and worldwide.