Thursday, December 31, 2009

Setterfield & c-Decay: "Data and Creation: The ZPE-Plasma Model"

In this post, long overdue, I'll address Setterfield's paper which he also labels as “Response to Tom Bridgman, part II”, available here

As we'll see, this paper is a very weak response to my complaints.  In the process, Setterfield provides even more examples of why his claims qualify as pseudoscience.

In Section II, under 'The Zero Point Energy and the Redshift', Setterfield again raises the issue of William Tifft's claims of quantized redshifts.  I've covered some of the issues with redshift quantization when discussing John Hartnett's use of this claim (see John Hartnett's Cosmos 1, 2).  More concrete examples of the problems are planned for a future post.

One of the key ingredients for any science to be reproducible is that other researchers must be able to use any mathematical techniques defined to test and expand on the research.  I'll use the example Setterfield uses in the sections 'The Zero Point Energy and atomic constants' and 'The Atomic age of our galaxy'.

First, take a look at an abbreviated version of equation 9:

c ~ (1+z) = (1+T)/sqrt(1-T^2).

The variable T represents the scaled lookback time: T=0 corresponds to today, while T=1 corresponds to the time of creation of the universe in Setterfield's model.  Note that by this equation, the redshift value, z, is zero today and infinite at the time of the creation of the universe.  Okay so far.  This is the one thing I mention in Setterfield G that Setterfield actually fixed, by eliminating the problems created by the trailing '-1' in this equation.

But this doesn't solve the problems I raise later in Setterfield G.

Setterfield mentions that the '~' symbol means 'proportional to'.  To be more definite, it means there is  a simple constant that relates the two quantities.  We'll use the symbol K for this constant, the same symbol Setterfield uses later in equation 13.  So equation 9 becomes

c = K(1+z) = K(1+T)/sqrt(1-T^2)

We can choose our units in something convenient.  For example, if we let our unit of time be one year and the unit of distance be one lightyear, then the speed of light today is (1 lightyear/1 year) = 1.  This has the net effect of casting our speed-of-light measurements into units relative to the speed-of-light today. 

What is the value of K for these units?  That's easy to determine from the data Setterfield has provided.  For today, using the units defined above, c=1 and z=0 so equation 9 becomes

1 = K(1+0)

which is only satisfied if K=1.  Therefore, equation 9 can be written

c = 1+z = (1+T)/sqrt(1-T^2)
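As a quick reproducibility check, the boundary condition that fixes K can be verified in a few lines (my own sketch, in Python; not Setterfield's code):

```python
import math

# Setterfield's equation 9 with the constant fixed to K = 1 by today's
# values (c = 1 and z = 0 in our lightyear/year units):
def c_of_T(T):
    """Speed of light (in units of today's value) at scaled time T."""
    return (1 + T) / math.sqrt(1 - T**2)

assert c_of_T(0) == 1.0    # today: c = 1 and z = 0, which forces K = 1
assert c_of_T(0.99) > 10   # c (and z) grow without bound as T -> 1
```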

Next, we examine Setterfield's equation 10, the lookback time, which is just the distance the light has traveled if emitted at the scaled time T and received today.  He states that this is the integral of equation 9, but he doesn't tell the complete story.  He writes the equation as

t = K*[arcsin(T) - sqrt(1-T^2) -1]

where K* is yet another constant.  He solves for K* by requiring that the total lookback time to the creation of the galaxy match an apparent age of about 12.3 billion years.  With this constraint, he obtains

K* = 4.7846e9 lightyears or years

Using some additional manipulations, which are unclear, he determines the value of K in equation 12

K = 1.780e6

Now look at Setterfield's use of K in equation 13

c* = K(1+z)

which reveals that what Setterfield is calling c* is what we call 'c' above.

But now, according to Setterfield, K is not equal to one!  To get another clue, let's use Setterfield's equation 13 to determine the speed of light today, when z=0.  We get:

c* = 1.780e6(1+0) = 1.780e6 = 1,780,000.0

or, according to Setterfield, the speed of light today is nearly 2 million times faster than the speed of light today???!!!  This is an internal contradiction in Setterfield's theory.  It generates nonsensical values in a location where we have reliable measurements! 

What gives??

The fatal error, which Setterfield has been repeatedly evading, is that in the transition of integrating equation 9 to generate equation 10, the constant 'K' must be equal to 'K*'.  Setterfield never acknowledges this fact and here demonstrates a significant effort to hide it.
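The integration step itself is easy to check numerically; here is a sketch (my own, assuming NumPy and SciPy are available) confirming that the claimed antiderivative is correct, and illustrating that any multiplicative constant on equation 9 necessarily carries through the integral unchanged:

```python
import numpy as np
from scipy.integrate import quad

# Equation 9 with K = 1: c(T) = (1 + T)/sqrt(1 - T^2)
def c_of_T(T):
    return (1 + T) / np.sqrt(1 - T**2)

# Lookback time: the integral of c(T) from 0 (today) to T.
def lookback(T):
    return quad(c_of_T, 0, T)[0]

# The antiderivative in equation 10: arcsin(T) - sqrt(1 - T^2), with the
# integration constant chosen so the lookback time is zero today (T = 0).
def antideriv(T):
    return (np.arcsin(T) - np.sqrt(1 - T**2)) + 1

for T in (0.1, 0.5, 0.9):
    assert abs(lookback(T) - antideriv(T)) < 1e-6

# Multiplying c(T) by any constant K multiplies the integral by the same K,
# so the constant in equation 10 must satisfy K* = K.
```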

This article is clearly not a response to me, as Setterfield does not address any of the issues I raised in “Setterfield G”; he merely tries to hide them behind another layer of obfuscation.  This is a 'sleight of math' maneuver that is highly suspicious because it suggests Setterfield knows his claims are in deep trouble and is trying to hide the error.  The alternative is that Setterfield really has no clue what he is doing.  Either way, with the lousy math skills he's demonstrated, I would not want Mr. Setterfield doing my tax accounting!

Here is another example of how, as with many other pseudosciences, when a mathematical model is provided at all, it is a model that is useless for research by anyone else.  This makes it impossible for the claims to become 'accepted science'.  And in the process of solving problems 'out there', where we have poor measurements, flawed models generate nonsensical results in areas where we have good measurements!

I'll save comments on the "Origin of the Elements" section for a future post.

Friday, December 18, 2009

Setterfield & c-Decay: "Reviewing a Plasma Universe with Zero Point Energy"

Okay, I'm finally getting back to some young-universe creationist issues.  I had been working on these months ago when I was diverted by Don Scott's presentation at GSFC.

Here I'll do a further examination of Barry Setterfield's “Reviewing a Plasma Universe with Zero Point Energy”.  My occasional co-author Jerry Jellison has written some material related to this monograph already (See Critique of Some New Setterfield Material).  I had written an earlier criticism on Setterfield's math (”Setterfield G”) and data selections (”Barry Setterfield joins the Electric Cosmos”, “Setterfield Again...”).  Since I've spent much of the past two years examining Electric Universe (EU) claims, I'll be leveraging much of that background here as I explore some of Setterfield's attempt to integrate his c-decay into an electric universe (EU)/plasma cosmology (PC) framework. 

Section I of Setterfield's work is largely a primer on plasmas in space.  One interesting aspect is the heavy reliance on sources at the Thunderbolts site, which is largely an assembly of EU supporters.  Other large sections of this history seem to be based on material that is also available from Wikipedia (compare to "Wikipedia: Birkeland Current").

Setterfield tries to invalidate mainstream astronomy through many of the EU claims of discoveries of Birkeland currents in space.  Yet all of the observations EU (and Setterfield) document as evidence that electric fields can be a significant driver are consistent with mechanisms known back in the 1920s, such as rotating magnetic dipoles of the geomagnetic field and ionospheric mechanisms forming double layers by gravity gradients, the Pannekoek-Rosseland field (See ”The REAL Electric Universe”).  EU supporters claim these discoveries are evidence for their more outrageous claims, such as the Sun being powered by external electric currents or galaxies forming from interacting Birkeland currents, much the same way young-universe creationists try to leverage archeological discoveries of some city mentioned in the Bible as evidence for their interpretations of Genesis.

Now for an examination of some errors which reveal the poor quality of Setterfield's scholarship:

page 3: Setterfield claims that the critical ionization velocity is the same as the Alfven velocity. “This “critical ionization velocity” was predicted to be in the range of 5 to 50 kilometers per second. In 1961 this prediction was verified in a plasma laboratory, and this cloud velocity is now often called the Alfvén velocity.”  

Incorrect.  The Alfven velocity is the speed of an Alfven wave (Wikipedia: Alfven Wave).

page 7: on the formation of double layers, Setterfield claims “In general, the oppositely charged DL are usually maintained their electric potential difference is balanced by a compensating pressure which may have a variety of origins.” 

In the configuration he describes in the preceding paragraph, the 'compensating pressure' is a gravity gradient, the Pannekoek-Rosseland field mentioned above.  In this configuration, the number of high-energy particles generated by the field is small compared to the large number of particles required to establish and maintain the field. 

page 7: quoting Peratt, “the metal-to-hydrogen ratio should be maximum near the center and decrease outwardly”. 

Since the current streams of Peratt's galaxies give the system axial symmetry, this direction would be radially outward in the galactic disk.  This is inconsistent with observations of galaxies: Population II stars, with low metallicity, occupy the bulge (center) and halo of galaxies, while Population I stars, with high metallicity, occupy the disk (and spiral arms).  In reality, the abundance gradient shows high metallicity in the galactic disk, with decreasing metallicity above and below the disk (See Wikipedia: metallicity).

pages 18-19:  The problems here are covered in my post “Setterfield G”.

page 23: “Closely linked with the formation of compressed plasma cores of galaxies are the oldest group of stellar objects, the Population II stars.” 

This is wrong for the same reasons as Setterfield's claims on page 7 are wrong.

page 22: “quasars from the earliest epochs with redshifts around z=6.5 or greater, show the same iron abundance as pertains at present.” 

Setterfield's own references do not support his interpretation.  While the Thompson et al. reference discusses the observations, the Kashlinsky reference suggests detection of emission of residual infrared radiation from  Population III stars (formed from the initial hydrogen and helium of the Big Bang) that would produce the needed metals in limited regions.  Models suggest that the lack of elements with Z>2 makes Population III stars much more massive than Population I or II stars.

page 22: “a 50 million degree ignition temperature is easily achieve with a mere 4308.7 eV with no restriction on which elements may be formed.” 

Setterfield suggests this claim comes from pp 105-107 of Don Scott's “The Electric Sky”, but I can't find it. 

If you compute the Coulombic barrier for two protons, you get about 125 keV, far higher than the 4.3 keV kinetic energy Setterfield specifies above.  If you start with the simplest, most stable form of matter, protons and electrons (balanced for charge neutrality), the only way to get element formation started at such energies is quantum tunneling through the classical barrier.  However, we already know this process comes into play at the 17 million degrees corresponding to the temperature at the center of one solar mass of hydrogen and helium, such as the Sun.  Setterfield's element formation mechanism becomes redundant. 

He could avoid the coulomb barrier penetration by starting with neutrons, but then he has only minutes for them to build nuclei before they decay.  Even worse, Setterfield has this mechanism operating in the early universe on his extremely accelerated atomic time scale, while the collision rate between the particles operates on his slower dynamical time scale. 

Without a more definitive reference with details to examine, this piece of information appears to be more the product of wishful thinking than physics.

Exercises for the Reader
* Estimate the height of the coulombic barrier between two protons.  If this were the mean thermal energy of an electron-proton gas, what is its temperature?
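A back-of-envelope version of this exercise (my own sketch; the ~10 fm separation is an assumed value, roughly the range of the strong force):

```python
import math

k_e = 8.988e9     # Coulomb constant, N m^2/C^2
e   = 1.602e-19   # elementary charge, C
kB  = 1.381e-23   # Boltzmann constant, J/K

d = 1.0e-14                 # proton-proton separation, m (~10 fm, assumed)
E = k_e * e**2 / d          # Coulomb barrier height, joules
E_keV = E / e / 1e3         # ~140 keV, the same order as the ~125 keV quoted

# If this were the mean thermal energy of the gas, (3/2) kB T = E:
T = 2 * E / (3 * kB)        # ~1e9 K, far hotter than any stellar core
```

Quantum tunneling is what lets fusion proceed at stellar core temperatures far below this classical estimate.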
Coming up: a look at “Data and Creation: The ZPE-Plasma Model”, AKA “Response to Tom Bridgman, part 2”

Update: January 28, 2014: Fixed broken links.

Sunday, December 13, 2009

A Paper Illustrating More of Crothers' Relativity Errors

Dr. Jason Sharples has published a paper in  'Progress in Physics', “Coordinate Transformations and Metric Extension: a Rebuttal to the Relativistic Claims of Stephen J. Crothers” which points out some of the many strange errors that Stephen J. Crothers makes in his somewhat bizarre interpretation of relativity.  I've written some on this topic already (See "Some Preliminary Comments on Crothers' Relativity Claims").

Dr. Sharples exposes Crothers' misstatements in a very pedagogical way, choosing simpler examples, such as 2-dimensional geometry, and applying Crothers' analysis methods.  This technique illustrates that Crothers' claims of 'fatal problems for general relativity' are actually problems in Mr. Crothers' interpretation of general relativity.

For example, Mr. Crothers likes to claim that General Relativity has an internal contradiction because the metric radius in a Hilbert form of the Schwarzschild metric is not equal to the Gaussian curvature (Wikipedia: Gaussian Curvature) of the metric.  Dr. Sharples uses the simple example of a spherical line element in a Euclidean (flat) 3-dimensional space to illustrate that these quantities are not equal even in this simplified case and are not required to be equal.

From this introductory example, Sharples dives into Crothers' strange arguments about the Schwarzschild solution. 

1) One of the more interesting revelations from Sharples' examination is that Crothers' 'solution' for a spherically-symmetric time-independent system in General Relativity is actually just the Schwarzschild metric, truncated to the region outside the event horizon.

2) Crothers presents the variable $\alpha$ in his form of the equation as an arbitrary free parameter.  Crothers never bothers to apply the physical constraint that the metric must generate motions consistent with the Newtonian gravitational solutions.  Once this constraint is applied, Sharples demonstrates that $\alpha = 2m$, the Schwarzschild radius!

3) Item (2) becomes even more important when Sharples demonstrates that Crothers' solution is simply the traditional Schwarzschild solution mapped in a different coordinate system.  Crothers' infinity of solutions has no physical meaning, just as we can study the Earth in spherical, cylindrical, or cartesian coordinates (whichever is more convenient for the mathematics) with no change in the physical results.

I'd like to thank Dr. Sharples for his work on a very clear and understandable paper.  My own GR is a bit rusty and I would have had to spend some time reviewing relativity before I could have prepared a response to Crothers of this quality.  I am also very pleased that Dr. Sharples hit at the same fundamental areas where I suspected Crothers was off-base, using simpler real-world geometric cases to expose Crothers' misunderstandings as well as applying the real-world constraints in Crothers' Schwarzschild analysis.  They were all on my 'to do list' for a possible response.

Crothers' analysis is seriously flawed.  I wonder how Crothers would make his 'interpretation' of the spherically-symmetric solution consistent with the physics needed to make a reliable GPS receiver (see “Scott Rebuttal. I. GPS & Relativity”).

As an aside, I also find it interesting that Mr. Crothers has become aligned with the Electric Universe (EU) advocates.  Mr. Crothers' understanding of physics seems to rely on some rather bizarre interpretations of mathematics that keep it disconnected from real physical theories.  Yet comparison of mathematical models against observations and/or experiments is a key component of valid science.  If Crothers chooses to dismiss such validation, he is admitting that he is not doing science.

Meanwhile, the EU supporters distrust mathematical models, considering the level of excuses I receive when I've tried to find reproducible details on their Electric Sun models.  EU seems to rely on what could only be described as electrophilic pareidolia (Wikipedia: Pareidolia) in observations, assuming any filamentary glowing structure must be an electric arc.

Crothers is all mathematics with no experiment.
The Electric Universe is all experiment with no mathematics.
How these two ended up working together is a mystery in itself!

Mr. Crothers has apparently prepared a rebuttal to Sharples, but it was rejected (!!) by 'Progress in Physics' (Crothers is on the editorial board of this publication).  I suspect the rebuttal was longer than the 8-page limit of PiP's new policy.  If Mr. Crothers has this response online, I'll be happy to post a LINK to it.

“Experiment without theory is tinkering.  Theory without experiment is numerology.”
Both are needed for a successful science.

Sunday, December 6, 2009

Pseudoscience & 'ClimateGate'

Yet another diversion from creationism issues, but it is still related to pseudo-astronomy and its tactics.  The issues are still linked because the underlying physics is the same.

Probably the most complete work I've read on the physics, chemistry, and history of climate change is “The Discovery of Global Warming - A History” by Spencer Weart (American Institute of Physics).    But the bottom line on the issue is that the intake and output of every organism alters the chemical composition of its environment, and directly or indirectly, the Earth's climate.  These environmental changes can become so extreme that they prove detrimental for the organism itself.  Humans are just the most recent organisms in the history of the planet to significantly alter the atmosphere.

This video presents the problem as a risk analysis by a high-school science teacher.

Probably the most disturbing thought is a recent publication suggesting that it may already be too late:  Are there basic physical constraints on future anthropogenic emissions of carbon dioxide? by Timothy J. Garrett (21 November 2009).  This researcher analyzed the problem from the point of fundamental thermodynamics.  I suspect there might be a few parameters he missed, but it suggests tight constraints on the problem.

The latest political stunt in the field of climate change has been dubbed “ClimateGate” by some in the media.  You can search for the term to find more of the 'controversy'.  (also see Wikipedia: Climatic Research Unit e-mail hacking incident)

Among the fallout of this 'scandal' are demands that all the data and software for analyzing it be made public.  Some of these people asking for data need to learn how to use Google.

The fact is, much of this data is already public.  NASA has pushed much of the satellite data it has collected into public data archives.  Many are freely accessible online, just like much of the astronomical data NASA has collected.  Over the past decade, there was a political effort to reduce the availability of that data, which takes some time to correct.  Here's a link to some mission-specific data: Climate.

Real Climate is also distributing a growing list of links to data used in climate research.  See “RealClimate: Where's the Data”.   I suggest Mr. Horner stock up on a few hundred terabyte disk drives if he really wants this data.  After all, if you want to uncover a 'scandal', you have to go back to the RAW data.  He might want to hire a few (dozen?) programmers with a strong background in numerical methods and scientific data formats, that is, if he really plans to have it re-analyzed.

As for software, here's my short list of the numerous public codes available, mostly oriented towards education.
    •    pyClimate
    •    EdGCM
    •    SourceForge: Climate Model
    •    Java Climate Model
    •    NASA/GSFC Open Source Climate Model

Some of these are from my resource list when I used to work with Earth science data. Many of them show up in reasonably intelligent searches on Google.

For those who think the Sun takes the full blame for the warming trends, here's a one-stop resource for most solar data:  Virtual Solar Observatory.  (I've never understood why claiming the Sun is totally responsible for the current warming trend is regarded as good news by so many.  If it were due to the Sun, then there is virtually nothing we can do about it.  Consider the impact continued warming would have on water availability to food supplies to eventually the entire economy.  If it is the Sun, then as a species, we are so screwed!)

The downside of making too much code openly available is that too many researchers may rely on the exact same algorithm since it is so easily available.  Sometimes this is good, but it can have a bad effect as well.  Multiple independent researchers generally solve computational problems in different ways.  They will argue about the techniques used and compare results (this is evident in some of the 'leaked' emails, which seem to be very conveniently edited - or quote-mined).  This diversity in coding actually makes it easier to catch errors, as erroneous code or algorithms will stand out more easily.  This checking of the algorithms used by others was also part of the content of the e-mails.  The fact that the different models generate such similar trends suggests that, while not perfect, they give a reliable guide.  Remember, the codes are just as likely to underestimate the severity of some changes as to overestimate it.

I am not a climate scientist but a bunch of them work “down the hall“ from me.  I know a few others, particularly Bob Grumbine of the More Grumbine Science blog, who gives interesting introductory tutorials on climatology and climate data.  Bob was (is?  My ISP no longer carries USENET) also a regular contributor on Talk.Origins on topics of creationism and Electric Universe claims.

As for some of the comments in the e-mails?  Yes, you always discuss what the detractors may throw at you. Any good scientist, or chess player, tries to plan several moves ahead based on possible responses from their competitors. Science is a competitive endeavor.

I've had discussions with colleagues about journals that appear to have had their editorial boards taken over by creationists and other cranks.  Those discussions would certainly read similar to the 'leaked' emails.  Is that evidence that the creationists and crackpots are correct?

There have been moves for nearly 20 years now to make the scientific process more open.  It is slow, but it is progressing.  There is a move to standardize scientific publications for more reproducibility.  This has only become practical recently with the availability of cheaper and larger methods of data storage to save the many stages of revisions scientific software goes through.  See The Open Science Project.

But science only works when all participants are bound by the same standards and criteria.  This is where pseudo-scientists start making excuses - claiming anything from “God did it” to get away from irreproducibility or declaring a distrust of mathematical models - as their exemption. 

So when are the climate-change deniers going to reveal their models and data? Or are the latest accusations just a ploy to distract the public's attention from the real issues?

Other nations have gone down this road of denying some aspect of science that challenged their belief system (See Wikipedia: Deutsche Physik, Lysenkoism).  Eventually the citizens of those nations pay a heavy price for that ignorance.  That these e-mails are viewed as a 'scandal' is an indicator of the sad state of science education in the U.S. and worldwide.

Monday, November 30, 2009

Charge Separation in Space

One of my readers asked me about this post from August 2004 on the Thunderbolts site: Charge Separation in Space

Here EU is playing games with what professional astronomers mean by charge separation.  In general, astrophysical plasmas are electrically neutral on large scales.  This graphic indicates what is actually happening in a predominantly neutral plasma.

In this case, the free charges (red and green dots) are sufficiently intermingled that any sample of the volume with a large number of charges will be essentially neutral.  If you examined it on a sufficiently small scale, you could probably find small regions (on scales of the Debye length; see Wikipedia: Debye Length) which might be charge-imbalanced for a short time.

While electrostatic forces will still try to pull ions and electrons together to form neutral atoms, the thermal energy of the particles is sufficient to overcome the ionization potential of the atoms.  These particles are always in motion.  Any imbalance in charge will create an electric field which acts to return the plasma to a neutral state.  If losses due to collisions with neutral atoms or photon emission are too low, the motion can set up an oscillation at the plasma frequency (Wikipedia: Plasma Frequency). 
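For a sense of scale, both quantities are easy to estimate from textbook formulas; here is a sketch with solar-wind-like numbers (the density and temperature are my own illustrative assumptions, not values from the Thunderbolts article):

```python
import math

eps0 = 8.854e-12   # permittivity of free space, F/m
kB   = 1.381e-23   # Boltzmann constant, J/K
e    = 1.602e-19   # elementary charge, C
m_e  = 9.109e-31   # electron mass, kg

# Assumed solar-wind-like plasma near 1 AU:
n = 5e6    # electron density, m^-3
T = 1e5    # electron temperature, K

# Debye length: the scale below which charge imbalances can persist
lambda_D = math.sqrt(eps0 * kB * T / (n * e**2))

# Plasma frequency: the oscillation rate when neutrality is perturbed
omega_p = math.sqrt(n * e**2 / (eps0 * m_e))

print(f"Debye length     ~ {lambda_D:.1f} m")
print(f"Plasma frequency ~ {omega_p / (2 * math.pi) / 1e3:.1f} kHz")
```

A Debye length of meters and a plasma frequency of tens of kilohertz are tiny compared to the size of the Sun or the heliosphere, which is the point: departures from neutrality are local and short-lived.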

For a discussion of some of the conditions where astronomers know charge separation/electric fields can take place, see my earlier post, “The REAL Electric Universe”.   For the case of the Pannekoek-Rosseland field mentioned in the linked article, the charges are held separated by the gravitational gradient.  In this case, the entire mass of the Sun can only support a charge separation of about 100 coulombs.  This is a very small quantity when compared to an object as large as the Sun.

When astronomers say there is no significant charge separation in space, we are talking about bulk charge separation, contrary to what EU advocates want to claim.  EU advocates usually mean large groups of the same charge (red vs. green) are separated by some distance, like this:

Here, the black arrows represent the direction of the electrostatic forces which will work to pull the separated charges back together.  Also note that within the regions of the same charge (all electrons or all ions), the like-charged particles will be repelling each other! 

Explaining various astrophysical phenomena by these mechanisms requires charge separations and electric fields far larger than can be provided by known mechanisms.  The energy to separate the charges (not just ionize the atoms) has to come from somewhere!  Irving Langmuir (Wikipedia: Irving Langmuir) understood why you cannot get significant, sustained charge separation unless something stronger than the attractive electrostatic force, like your lab equipment, is holding the charges apart.

Another item is that the EU article has interpreted the spectroscopic notation incorrectly.  The number of electrons missing is one less than the value of the roman numeral.  OVIII has one electron remaining (it is called hydrogenic; Wikipedia: Hydrogen-like Atom).  Neutral hydrogen is HI, while ionized hydrogen is HII.  Similarly, neutral helium is HeI, singly-ionized helium is HeII, and doubly-ionized helium is HeIII.  In some cases, modern notation is creeping in, so some more recent papers use the superscript ionization notation: HII = $H^{+1}$ and CIV = $C^{+3}$.  An atom with all of its electrons missing cannot generate spectral lines by atomic processes.
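The rule is mechanical enough to put in code.  This is a hypothetical helper of my own for illustration (not from any astronomy package): the Roman numeral equals the ionization stage plus one.

```python
# Roman numeral digits that appear in spectroscopic stage labels
ROMAN = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100}

def ion_charge(numeral: str) -> int:
    """Net charge implied by a spectroscopic Roman numeral,
    e.g. 'I' -> 0 (neutral), 'VIII' -> +7 (hydrogenic oxygen)."""
    total, prev = 0, 0
    for ch in reversed(numeral):          # handle subtractive notation
        val = ROMAN[ch]
        total = total - val if val < prev else total + val
        prev = max(prev, val)
    return total - 1                      # stage N means N-1 electrons removed

assert ion_charge('I') == 0      # HI: neutral hydrogen
assert ion_charge('II') == 1     # HII: ionized hydrogen, H+
assert ion_charge('VIII') == 7   # OVIII: one electron remaining
assert ion_charge('IV') == 3     # CIV: triply-ionized carbon, C+3
```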

The issues of spectral line formation were first figured out in the 1920s and 1930s with the development of quantum mechanics.  It's a fairly advanced spectroscopy topic.  Astrophysicists have developed software that solves the very complex system of equations, described at the sites above, used to describe atoms, ions, and electrons under these conditions.  Two packages I have had some experience with are Chianti and XSTAR.  For a gas of some specified composition of chemical elements at a specified temperature, these programs compute the amount of ionization and the intensity of the spectral lines.  These programs are used to determine characteristics of astrophysical environments but are also tested against laboratory experiments.

As usual, the EU crowd is playing fast-n-loose with their 'facts'.

Friday, November 27, 2009

From Cosmic Variance: The Alternative-Science Respectability Checklist

Here is another useful resource I found on the Cosmic Variance blog, a couple of years old, but still relevant for anyone who wants to complain that their revolutionary scientific idea is being ignored by the mainstream scientific community. 
The Alternative-Science Respectability Checklist

The primary points of this checklist:
  1. Acquire basic competency in whatever field of science your discovery belongs to.
  2. Understand, and make a good-faith effort to confront, the fundamental objections to your claims within established science.
  3. Present your discovery in a way that is complete, transparent, and unambiguous.
Consider it a follow-on to “Doin' Astronomy (and Science in General)...”

Thursday, November 19, 2009

What Astronomers Didn't Know...

A popular tactic among pseudo-scientists, because they have little evidence actually in favor of their pet theory, is to harp on the 'problems', real or imagined, of the reigning theory with the claim that their theory is the solution to the problems.

Most responses to this argument approach it from the perspective that this is a false dichotomy, that our only choice is Theory A or Theory B.

But in addition to the false dichotomy aspect, this argument also exhibits a gross misunderstanding (or even arrogant ignorance) of how science actually works. 

Every new scientific understanding started with a problem.  This has been true for the past four hundred years of scientific history.

Consider this snapshot from history:

In 1912, if you asked the following questions, astronomers (and physicists) would have no answer, though they would have lots of speculation.
  • Why do atoms emit spectral lines?
  • What is the cause of radioactivity?
  • Why do atoms bind to form molecules?
  • Why does beta decay violate conservation of energy?
  • What causes the anomalous perihelion precession of the planet Mercury?  Despite numerous searches, no planet has been found to account for this.
  • Where are the elements nebulium and coronium in the periodic table?
  • What causes the unidentified spectral lines (the Pickering Series) of the star zeta Puppis?
  • What energy source powers the Sun and other stars?
  • Why are white dwarf stars so small and faint, yet sometimes more massive than the Sun?
  • No one has yet demonstrated that planets orbit due to gravity.  No one has been able to do the “Newton's Cannon” experiment.
  • The Moon is nearby, yet we do not understand its motion (See "The Problem of the Moon's Motion")
Yet 50 years later, all of them would be answered and we would actually have products based on the physical understanding that was part of solving the original questions.  The solutions to many of these problems were, in fact, related.  I document some of these in my paper “The Cosmos in Your Pocket: How Cosmological Science Became Earth Technology”.  Of course, the solution of these problems would also refine our instruments and our understanding to the point that we could measure to higher precision and uncover new issues at the next level of detail.

When confronted with these earlier, now solved problems, pseudo-scientists will often try to claim that these earlier problems were not as big (how do you define the 'bigness' of a scientific problem?), or nowhere near as controversial, or not as difficult to solve as the scientific problems of the present.  Some go so far as to claim they were never really problems at all.

And yet, many of these problems would take at least a decade, and sometimes more, to be solved by the best minds of the day.

Tuesday, November 3, 2009

"Electric Sun Verified"?? - In your dreams...

Shortly after I had prepared the response above, I received news of the 'official' spin being placed on the IBEX skymap announcement (See IBEX Results cause even more problems for the Electric Sun model):

Electric Sun Verified by Wal Thornhill

Not surprisingly, Thornhill claims the 'ribbon' seen by IBEX fits the 'Electric Stars' model perfectly.  This mission has been flying for six months, yet NOWHERE do we find the EU 'prediction' of the IBEX skymaps, particularly not with any estimates of the fluxes that the instrument would detect.  Why doesn't Mr. Thornhill demonstrate his computation of the neutral atom fluxes?  If his model actually works, then this should be a straightforward step.  The IBEX data are freely available at the IBEX data archive.

First, let's take a look at some of Thornhill's statements.
Thornhill: “IBEX has discovered that the heliosheath is dominated not by the Sun but by the Galaxy’s magnetic field.” 
No. IBEX discovered that the energetic neutral atom flux is dominated by the galactic magnetic field.  The heliosheath itself would not exist if not for the outflowing solar wind.
Thornhill: “Comets are an electrical phenomenon where the comet nucleus is a negative cathode in the Sun’s plasma discharge. Examples of cometary stars are uncommon because stars are normally a positive anode in the galactic discharge.”
Laboratory cathodes and anodes form part of a complete circuit.  Where is the return circuit between the Sun and the comet?  If we see the comet, why don't we see the return path of the particles?  In the lab, the return circuit corresponds to the wires connecting the discharge tube to the power source.  And just where is the battery or generator that keeps the system energized?
Thornhill: The “open” helical magnetic fields discovered high above the Sun’s poles by the Ulysses spacecraft are supportive of Alfvén’s stellar circuit model.
“Open” field lines are lines that don't connect back to the source of the field.  This means the 'open' field lines cannot form a complete circuit, contrary to Thornhill's claim.  Thornhill's invocation of 'open' magnetic field lines puts him in contradiction with Don Scott who claims that there is no such thing as an 'open' magnetic field  (See Scott Rebuttal. IV.  'Open' Magnetic Fields).
Thornhill: “Given the detail in this model we should expect, as more data comes in, that researchers may find in the ENA ‘ribbon,’ bright spots, filamentary structures, and movement of the bright spots consistent with rotation of Birkeland current filament pairs and their possible coalescence.”
This is a pretty weak prediction.  Tabloid psychics can do this.  Features that could be 'bright spots' are already visible in the IBEX map.

Now consider Thornhill's quote of Alfvén:
In 1984 Alfvén predicted from his circuit model of the Sun there are two double layers, one connected to each pole at some unknown distance from the Sun or heliosphere. He wrote, “As neither double layer nor circuit can be derived from magnetofluid models of a plasma, such models are useless for treating energy transfer by means of double layers. They must be replaced by particle models and circuit theory... Application to the heliospheric current systems leads to the prediction of two double layers on the sun's axis which may give radiations detectable from Earth. Double layers in space should be classified as a new type of celestial object.” — H. Alfvén, Double Layers and Circuits in Astrophysics, IEEE Transactions on Plasma Science, Vol. PS-14, No. 6, December 1986.
But Alfvén's 'circuit model of the Sun' is NOT the same as EU's Electric Sun model, for Alfvén was not suggesting that his circuit mechanism was the source of solar luminosity.  Alfvén described it as a possible mechanism for heliospheric plasma flows.  Alfvén fully understood that stars were powered by nuclear energy and that stellar astrophysics had a major role in the study of laboratory plasmas.  Consider this quote from Alfvén's paper “Cosmical Electrodynamics” (American Journal of Physics, 28:613–618, October 1960, doi: 10.1119/1.1935919):
“Even if Birkeland's experiments were as good as could be made in his time, he could not produce a high-temperature dense plasma, and it is only by studying this state of matter that we really can draw certain conclusions about cosmical phenomena.  This technique has not been available until the last few years and is a result of the so-called thermonuclear research.  This research got its start from astrophysics - as is illustrated not only by the term “Stellarator”, but also by the name of Spitzer - and it still gets much inspiration from cosmical electrodynamics.”
Clearly Alfvén was NOT a supporter of Electric Sun claims.  Why do the Electric Universe supporters insist on implying that he was?

But the real gem of the EU article is the graphic about halfway down the page titled “The Sun's Environment”.  Here are just a few of the issues and questions this model raises.
  • Central z-pinch current column is a single current.  This configuration has the same problems as the 'Solar Resistor Model' discussed before: the configuration is unstable, and any current sufficient to explain the solar luminosity creates a magnetic field far stronger than observed.  In the Thunderbolts thread, they seem to deny that they use this configuration, instead favoring the 'spherical capacitor configuration'. 
  • This z-pinch current column is inconsistent with the description in “The Electric Sky”, page 112, Figure 21, which has two currents directed into the Sun from the north and south poles. 
  • Thornhill's interstellar magnetic field is in the vertical direction in this graphic.  This is inconsistent with the field produced by the z-pinch currents, which wraps azimuthally around the current column.  Clearly, someone at EU forgot the rules for magnetic field formation by currents.
  • So what creates the 'vertical' interstellar magnetic field?  This field must be much stronger than the field created by the z-pinch in order to keep the z-pinch stable.  All values predicted for this configuration are far larger than any measured values.  If the claim is that this field is generated by the intergalactic current streams, the direction is still inconsistent.
  • Consider the disk of charged particles from the Sun.  Does it consist of protons AND electrons from the Sun?  If so, this is radically different from the outward proton flow and inward electron flow of the 'spherical capacitor model' described in some EU forums and "The Electric Sky".
  • What confines the 'disk of charged particles from the Sun' to a disk structure?
  • What holds the 'double layers' in place?  All known laboratory & astrophysical double layers are 'anchored' to some structure (in astrophysical cases, this is often by gravitational stratification, see "The Real Electric Universe" ) so the double layer does not collapse due to the attraction of its own opposite charges.  Without this 'anchor', the double layer collapses on a timescale on the order of the inverse of the plasma frequency.
  • If all of these z-pinch currents powering stars are from filamentation of a galaxy-forming current stream, wouldn't this preferentially align the northern & southern magnetic poles of stars in the galaxy with the galactic spin axis?
  • If the stellar magnetic field is driven by these external currents, how can this mechanism explain the 11-year cycle of the solar magnetic reversals?  Do the galactic currents periodically change direction (perhaps they are A/C?).  If that is the case, wouldn't all stars in a given galaxy exhibit the same magnetic cycles (in period if not necessarily in phase)?
  • The current streams depicted in Thornhill's model should be strong emitters of synchrotron radiation.  No radio skymap sees these structures from nearby stars dominating the general structure created by the galactic magnetic field.
  • In the model defined by Thornhill's graphic, in which direction is the Sun moving to explain the IBEX observation?  That is, in what direction is the solar apex?
Like many of the other EU models, this current configuration can be plugged into Maxwell's equations to determine:
  1. Electric & magnetic field strengths to compare with measured field values
  2. Forces the E & B field configurations will produce on charged particles at any point.  This will tell you if the configuration is stable, or if unstable, on what time scale it will disintegrate.
  3. Effective charge densities created by differences in speeds of ions and free electrons.
Has any Electric Universe advocate done this?  I suspect not.
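The field-strength objection in the first bullet can be made quantitative with Ampère's law for a long straight current column, B = μ₀I/(2πr).  The sketch below is my own back-of-the-envelope illustration, assuming the Sun's full luminosity is delivered by a single current column through a hypothetical 1 GV potential drop (neither number comes from any EU source):

```python
# Rough check of a single-current-column ('Solar Resistor' style) model:
# what magnetic field would a z-pinch current powerful enough to light the
# Sun produce at Earth's orbit?  The 1 GV potential drop is an illustrative
# assumption, not a figure from Thornhill's article.
import math

MU0 = 4.0e-7 * math.pi      # vacuum permeability, T m / A
L_SUN = 3.8e26              # solar luminosity, W
V_DROP = 1.0e9              # assumed accelerating potential, volts
AU = 1.5e11                 # Earth-Sun distance, m

I = L_SUN / V_DROP          # current needed: P = I * V  ->  3.8e17 A

# Ampere's law for a long straight current column: B = mu0 I / (2 pi r)
B_at_earth = MU0 * I / (2.0 * math.pi * AU)

B_measured = 5.0e-9         # typical interplanetary field at 1 AU, ~5 nT

print(f"required current: {I:.2e} A")
print(f"field at 1 AU:   {B_at_earth:.2e} T")
print(f"measured field:  {B_measured:.2e} T")
print(f"discrepancy:     {B_at_earth / B_measured:.1e} x too large")
```

Even with a generous gigavolt potential, the implied field at Earth's orbit is of order half a tesla, roughly eight orders of magnitude above the measured few-nanotesla interplanetary field.  Raising the assumed voltage lowers the current, but nothing reasonable closes a gap that large.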

I suspect Mr. Thornhill doesn't understand the IBEX data projection (which is either Aitoff or Hammer), since the imprint of his 'disk' on the sky, or even of a current stream crossing a disk, is inconsistent with the shape of the IBEX ribbon structure (it is actually much more consistent with the shape examined by Schwadron, mentioned before).   It also creates problems for his interpretation of the 'hot spot' in the apparent direction of the heliotail being one of his 'double layers' that powers the solar z-pinch.

But the real demonstration that Thornhill does not understand what he is talking about is revealed in this quote:
Thornhill: Already there has been a report of an unexplained high-energy cosmic ray “hot spot” roughly in the direction of the inferred “heliotail.” The energies of the cosmic rays are in the range possible by acceleration in a galactic double layer (Carlqvist). Confirmation may soon come from observations of high-energy cosmic-ray electrons. The electrons undergo synchrotron and inverse Compton scattering losses and thus cannot travel very far from their sources, which makes them sensitive probes of nearby galactic sources and propagation.
First, the last sentence in the quote above is almost an exact quote from the report, though Thornhill did not note it as such, making it look like the statement is his own thought.  Second, he notes that the electrons cannot travel far from their sources due to "synchrotron and inverse Compton scattering losses".  Thornhill doesn't understand that these very same processes will act on his star-powering z-pinch!!  What does that say about how far these currents can propagate?

I'll give Mr. Thornhill a few months to assemble and publish his detailed results answering these questions.  I may include the implications of this new EU model as part of my presentation at the American Astronomical Society this coming January.

I also welcome constructive comments to address these questions.

Thursday, October 29, 2009

IBEX Results Cause Even More Problems for Electric Sun Model

The Mission and the Results
NASA recently released a 'first light' total skymap of data collected by the Interstellar Boundary Explorer (IBEX) mission (Wikipedia: IBEX).
NASA Release: Giant Ribbon Discovered at the Edge of the Solar System
IBEX News: IBEX Explores Galactic Frontier, Releases First-Ever All-Sky Map

IBEX measures the flux of high-energy (i.e., kilo-electron-volt) NEUTRAL atoms that propagate to the inner solar system from regions at or near the heliopause (Wikipedia: Electron Volt, Wikipedia: Heliosphere).

Figure 1: One energy band of the IBEX all-sky map, corresponding to atoms with an energy of around 1,100 electron volts.  The different colors correspond to different counts of atoms from the different directions in the sky (note the color bar at bottom).  The 'nose' of the heliosphere, the direction of the Sun's motion relative to the local interstellar medium, corresponds to the center of the map.  This map, A, is the entire sky projected into 2-dimensions, using an Aitoff projection (Wikipedia: Aitoff Projection).  The inset, B, reveals bright knots in the ribbon.  Credit: SwRI.  More images from the NASA press release.
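For readers wanting to connect sky coordinates to positions on all-sky maps like Figure 1, the Aitoff projection has a simple closed form.  A minimal sketch (the function name and conventions are my own; longitude lam runs over [-π, π], latitude phi over [-π/2, π/2], both in radians):

```python
# Aitoff projection: maps the whole sky onto a 2:1 ellipse, with the
# map center at (lam, phi) = (0, 0).
import math

def aitoff(lam, phi):
    """Return planar (x, y) map coordinates for sky point (lam, phi)."""
    alpha = math.acos(math.cos(phi) * math.cos(lam / 2.0))
    # unnormalized sinc, with the removable singularity at alpha = 0
    sinc = math.sin(alpha) / alpha if alpha != 0.0 else 1.0
    x = 2.0 * math.cos(phi) * math.sin(lam / 2.0) / sinc
    y = math.sin(phi) / sinc
    return x, y

print(aitoff(0.0, 0.0))        # map center
print(aitoff(math.pi, 0.0))    # right edge of the equator, x ~ pi
```

The Hammer projection (the other candidate for the IBEX map) differs only in the radial scaling but shares the same overall layout, which is why the two are easily confused.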

IBEX sees a clear enhancement in the direction of the Sun's motion through the interstellar medium (ISM).  In the skymap, this flux appears as blue, corresponding to a value of about 100 particles per square centimeter/second/steradian/keV.  This enhancement was expected by the standard model of the Sun's magnetic field interacting with the interstellar medium, much the same way as the Earth's magnetic field interacts with the solar wind to form the magnetosphere (Wikipedia: Magnetosphere). 

The unexpected result is the 'ribbon'-like structure seen to stretch across the maps.  This enhancement corresponds to a particle flux up to three times higher than the regular flux in the Sun's direction of motion.

This feature, which appears in a full sky survey, was undetected by the two Voyager spacecraft, which just happened to pass on either side of the 'ribbon'.  This illustrates the limitations of in situ measurements when compared to large-scale surveys.  This is why real science tries to incorporate both types of measurements. 

The press release emphasizes the surprising aspects of the result:
"This is a shocking new result," says IBEX principal investigator Dave McComas of the Southwest Research Institute. "We had no idea this ribbon existed--or what has created it. Our previous ideas about the outer heliosphere are going to have to be revised."
"We're missing some fundamental aspect of the interaction between the heliosphere and the rest of the galaxy. Theorists are working like crazy to figure this out."

But the problem for heliospheric physics is not as big as the enthusiastic wording of a NASA press release might suggest.  Such phrasing is popular among public affairs offices as it promotes science as an exciting field, but such phrasing is also fodder for crank science groups, who always like to claim their model predicted the result all along.  More on this aspect below.

The region of the ribbon on the sky agrees well with predictions of existing 3-D heliospheric models, if there is an additional interaction of the current, induced in the plasma by the interstellar magnetic field, with the magnetic field itself.  This is called the JxB force (see “Comparison of Interstellar Boundary Explorer Observations with 3D Global Heliospheric Models”, Schwadron et al.).  That the inclusion of the JxB term (which has units of a pressure) exhibits such excellent agreement with the intensity profile of the ribbon suggests there is an additional particle interaction at play which has not been included in standard heliospheric models.  This is possibly another indicator that magnetospheric MHD models have reached the limits of their ability to generate robust predictions in the era of modern instruments, and that more effort should be directed toward developing models which more accurately treat the particle kinetics (see "Hybrid Simulation Codes: Past, Present and Future - A Tutorial").

IBEX Implications for Electric Sun Models
There was some early reaction from the Electric Universe forum at Thunderbolts Forum (link).  (For those newcomers interested in other problems with Electric Sun models I've explored, see The Electric Sky: Short-Circuited, Electric Cosmos: The Solar Resistor Model, Electric Cosmos: The Solar Capacitor Model I, II, III, and entries under the tag "Electric Universe" on this blog.)

As usual, the comments from the EU fan club are scientifically useless, resembling a form of electrophilic pareidolia (Wikipedia: pareidolia).  Everything looks like some type of current to them.

So let's go into the reasons why the IBEX result creates more problems for the EU claims that the Sun is powered by external electrical energy.

1) IBEX reports a flux of Sunward-bound NEUTRAL atoms.  Not charged atoms, and not electrons (as required by many Electric Sun models).  In the forty years that humans have sent satellites ranging from the orbit of Mercury to the termination shock, no space plasma detector has detected Sunward-bound electrons at anywhere near the flux and energies needed to provide significant power to the Sun by this method.

2) This result indicates the heliopause is not a uniform region on fairly large scales.  So how can a flux of electrons from the heliopause create a sun that radiates so uniformly?  Consider the three possibilities:
  • Electrons on direct radial paths from the heliopause to the photosphere would imprint this 'ribbon' structure on the photosphere.  The flux difference is a factor of two or three, so we would see an 'imprint' of this feature on the photosphere.  Why don't we see this enhancement in solar brightness?
  • Suppose the electrons have their directions slightly randomized during their infall, so they hit the photosphere in a uniform flow, washing out the imprint of the IBEX 'ribbon'.  In that case, sunward electron fluxes should be about the same from any direction centered on the Sun.  This means the electron fluxes measured by spacecraft should be consistent with the values mentioned in my analysis of the "Solar Capacitor Model" (see 1, 2, 3).
  • The infalling electron flow, through randomization and electromagnetic focusing, gets confined to streamers.  The popular plasma lamp configuration (Wikipedia: Plasma Lamp) illustrates how such energetic streamers would create localized heating regions on the surface, again contrary to our observations of the Sun.
The only way the Sun could receive its energy from external particle flows, electrons or otherwise, is if electrons have some physics-defying properties yet to be detected in laboratories in our 100+ years of experience with them in scientific and engineering applications.
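The second scenario above can be made quantitative.  A rough sketch of my own, assuming a spherically uniform inflow of 1 keV electrons (the energy is an illustrative choice) carrying the full solar luminosity:

```python
# If the Sun were powered by a uniform inflow of ~1 keV electrons, what
# number flux would spacecraft at 1 AU have to see?
import math

L_SUN = 3.8e26                       # solar luminosity, W
E_ELECTRON = 1.0e3 * 1.602e-19       # assumed electron energy: 1 keV in joules
AU = 1.5e11                          # Earth-Sun distance, m

electrons_per_sec = L_SUN / E_ELECTRON      # ~2.4e42 electrons/s
sphere_area = 4.0 * math.pi * AU**2         # m^2
flux_m2 = electrons_per_sec / sphere_area   # electrons m^-2 s^-1
flux_cm2 = flux_m2 * 1.0e-4                 # electrons cm^-2 s^-1

print(f"required sunward flux at 1 AU: {flux_cm2:.1e} electrons/cm^2/s")
```

The required flux comes out near 10^15 electrons per square centimeter per second, sunward.  Measured solar-wind electron fluxes are many orders of magnitude smaller, and directed outward.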

An 'Official Word' from the Electric Universe Priesthood?
Shortly after I had prepared the response above, I received news of the 'official' spin being placed on the IBEX results by advocates of the 'Electric Universe': "Electric Sun Verified" by Wal Thornhill

Thornhill claims the 'ribbon' seen by IBEX fits the 'Electric Stars' model perfectly.  He includes several 'predictions' that are about as insightful and precise as a tabloid psychic's.

As part of this article, Thornhill included an interesting graphic about halfway down the page titled “The Sun's Environment”.  Close examination of this graphic and the accompanying text reveals that it is yet another Electric Sun model, with some features, or at least ambiguities, substantially different from those presented in the Electric Sun models of “The Electric Sky” and other EU resources. 

Of course, if EU was really doing science, they would apply Maxwell's equations to the current systems they propose and see what happens for themselves.  But that might be expecting too much of them.  Plus, it's much more entertaining to demonstrate how they don't understand the basics of the physics in which they claim expertise!

More to come...

Thanks to Nathan Schwadron (Boston U & Southwest Research Institute) for clarifying some of my questions in interpreting their paper.

Tuesday, October 13, 2009

Scott Rebuttal. IV. 'Open' magnetic field lines

Here is another entry in my response to Dr. Scott's 'rebuttal'.

“Open” magnetic field lines is another concept that Dr. Scott condemns (”The Electric Sky”, page 118; “D.E. Scott Rebuts T. Bridgman: Open Magnetic Field Lines”, pg 11), but like so many other of his claims, he is, at best, playing semantic games.  In principle, magnetic and electric field lines can extend to infinity, however, in most cases we wish to examine, we don't want or need to consider the behavior at infinity.  Is Dr. Scott saying that any time you want to visualize something with a magnetic field, you must represent the entire universe? 

In any real analysis, we have to draw the boundary somewhere.  This can leave field lines cut-off.  Particles can still flow along these lines.  In general, they will connect to field lines from another field of a more distant source.  In the case of magnetic dipole fields, these 'open' lines generally occur near the poles.  If Dr. Scott claims these lines don't exist, is he claiming that charged particles cannot travel out from these regions?  Where do the charged particles go?

Many researchers acknowledge that magnetic field lines can never truly end by enclosing the term 'open' in quotes.  I will use that convention here.

Dr. Scott's obsession with 'open' field lines also reveals a hypocrisy on his part.  It's as if he wants to treat them as 'real', in need of being drawn 'complete'.  But Dr. Scott's claim gets even stranger when we begin to explore his justification that 'open' field lines violate Maxwell's equations [Scott 2007].  Specifically, Scott claims that field lines must be closed to satisfy the Maxwell equation

∇ ⋅ B = 0
But magnetic field lines are not 'real'; they are representations of a vector field, mere guides to the direction of charged particle flow, representing the direction of the vector field at each point.  When we draw a field line as complete between two points, say A and B (see Figure 1 below), we are saying that we expect particles moving outward from point A to eventually arrive at point B.  If a field line is 'open', such as the one through point C, we don't expect the particle to ever return to the original system (say, to point D), but to connect to magnetic fields from other, more distant sources.  Dr. Scott's insistence on closed magnetic field lines makes such systems of plasmas permanently CLOSED to any kind of charged particle transfer.

Figure 1: An exploration of closed and 'open' field lines.  The red field lines close back on the source object for the field.  The blue field lines are 'open', connecting to more distant field sources which we don't show in this graphic.
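Field lines like those in Figure 1 can be generated numerically: a field line is simply the curve you get by repeatedly stepping along the local field direction.  Below is a minimal tracer of my own (a point dipole with moment along z, traced in the x-z plane; constants, box size, and cutoffs are arbitrary illustrative choices) showing that a line started near a pole leaves the plotting region without closing, which is exactly the working sense of 'open' used above:

```python
# Trace field lines of a 2-D slice of a dipole field by stepping along
# the local field direction.  Lines near the equator loop back to the
# source; lines near the pole exit the plotting box ('open' in the
# working sense: they would connect to more distant sources).
import math

def dipole_B(x, z):
    """Field of a point dipole (moment along +z) in the x-z plane."""
    r2 = x*x + z*z
    r5 = r2**2.5
    return 3.0*x*z/r5, (3.0*z*z - r2)/r5   # (Bx, Bz), constant prefactor dropped

def trace(x, z, step=0.01, nmax=20000, box=3.0):
    """Step along the field direction until the line leaves the box or
    returns to the neighborhood of the source."""
    for _ in range(nmax):
        bx, bz = dipole_B(x, z)
        b = math.hypot(bx, bz)
        x += step * bx / b
        z += step * bz / b
        if abs(x) > box or abs(z) > box:
            return "left box", (x, z)
        if x*x + z*z < 0.04:
            return "returned to source", (x, z)
    return "still inside", (x, z)

print(trace(0.05, 1.0))   # start near the pole: exits the box
print(trace(1.0, 0.0))    # start at the equator: loops back to the source
```

Note that in the 'left box' case the line hasn't ended; we have simply stopped drawing it at the edge of our region of interest, which is the whole point of the 'open' terminology.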

The Maxwell equation

∇ ⋅ B = 0

is true everywhere in a magnetic field.  This is a consequence of what is called the Divergence Theorem and is related to the observational evidence that there are no (known) magnetic monopoles (source points of magnetic flux).  It means that the net magnetic flux passing through a closed surface is zero: the same amount of magnetic flux passes into the surface as passes out of it.  This condition demands no other constraint!

In fact, the simplest set of 'open' field lines, a constant magnetic field, where the field lines extend from points at 'negative infinite distance' to 'positive infinite distance', trivially satisfies  the divergence condition.  Dr. Scott's claim that 'open' field lines violate the divergence condition for magnetic fields is false by trivial counter example!  An undergraduate physics major taking a year course in electromagnetism should immediately recognize Dr. Scott's statement as a gross error.
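The counter-example is easy to check numerically as well.  The sketch below (my own illustration) estimates ∇ ⋅ B by central differences for both a uniform field, whose lines are 'open', and a dipole field, whose drawn lines close; both give zero, so the divergence condition says nothing about whether field lines close:

```python
# Numerical check that div B = 0 for both an 'open'-line field (uniform)
# and a closed-line field (dipole), via central differences.
import math

def uniform_B(x, y, z):
    """Constant field along z: the simplest field with 'open' lines."""
    return 0.0, 0.0, 1.0

def dipole_B(x, y, z):
    """Point dipole with moment along z; constant prefactor dropped."""
    r2 = x*x + y*y + z*z
    r5 = r2**2.5
    return 3*x*z/r5, 3*y*z/r5, (3*z*z - r2)/r5

def divergence(B, x, y, z, h=1e-5):
    """Central-difference estimate of div B at (x, y, z)."""
    dBx = (B(x+h, y, z)[0] - B(x-h, y, z)[0]) / (2*h)
    dBy = (B(x, y+h, z)[1] - B(x, y-h, z)[1]) / (2*h)
    dBz = (B(x, y, z+h)[2] - B(x, y, z-h)[2]) / (2*h)
    return dBx + dBy + dBz

for name, B in [("uniform", uniform_B), ("dipole", dipole_B)]:
    print(name, divergence(B, 0.3, -0.4, 0.8))   # both ~0
```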

The really embarrassing part is that Dr. Scott managed to get this fundamentally erroneous claim into the IEEE Transactions on Plasma Science, a journal that is supposed to be peer-reviewed!  This wasn't an obscure paragraph in the paper; a large section of the paper was devoted to arguing this statement.  The journal allowed a blooper like this through?  What does that say about the quality of the peer-review process for this journal?

But let's cut Dr. Scott some slack.  Maybe Dr. Scott relying on the divergence condition was actually a typo (a really big typo).  Perhaps Dr. Scott really meant to invoke the other Maxwell equation related to magnetic field structure, Ampère's Law:

∇ × B = μ₀J + μ₀ε₀ ∂E/∂t

The only problem with this is that the applicable vector identity is Stokes' Theorem, so

∮ B ⋅ dl = ∫ (∇ × B) ⋅ dA = μ₀I_enclosed + μ₀ε₀ dΦ_E/dt

This mathematical relationship is true for any arbitrary closed path in the magnetic field, such as path EFGH, or even IJKL, in Figure 1.  It is not just true for paths representing magnetic field 'lines'!  What we define as magnetic field lines are just curves traced everywhere parallel to the local vector field, so that each line segment, dl, is parallel to the magnetic vector, B.

There is no requirement that the path be closed when defining magnetic field lines, only that if the path is closed, by Stokes' Theorem, it will specify some characteristic of the current and electric field passing through the surface enclosed by the path.
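This is also easy to verify numerically.  The sketch below (my own illustration, using the azimuthal field of a long straight wire) integrates B ⋅ dl around two circular loops: one happens to coincide with a field line, the other does not, yet the result depends only on whether the loop encloses the current:

```python
# Numerical Ampere's law check: the line integral of B around ANY closed
# loop equals mu0 times the enclosed current -- the loop need not be a
# field line.  Field source: a long straight wire along the z-axis.
import math

MU0 = 4.0e-7 * math.pi
I = 2.0                                  # amperes (arbitrary test value)

def B(x, y):
    """Azimuthal field of a wire along z through the origin."""
    r2 = x*x + y*y
    k = MU0 * I / (2.0 * math.pi * r2)
    return -k * y, k * x                 # (Bx, By)

def loop_integral(cx, cy, radius, n=100000):
    """Integrate B . dl around a circle of given center and radius."""
    total = 0.0
    for i in range(n):
        t0 = 2.0 * math.pi * i / n
        t1 = 2.0 * math.pi * (i + 1) / n
        x, y = cx + radius * math.cos(t0), cy + radius * math.sin(t0)
        dx = radius * (math.cos(t1) - math.cos(t0))
        dy = radius * (math.sin(t1) - math.sin(t0))
        bx, by = B(x, y)
        total += bx * dx + by * dy
    return total

print(loop_integral(0.0, 0.0, 1.0) / (MU0 * I))   # encloses wire: ~1.0
print(loop_integral(5.0, 0.0, 1.0) / (MU0 * I))   # doesn't enclose: ~0.0
```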

While drawing field lines as closed loops will guarantee that ∇ ⋅ B = 0 is satisfied, the insistence on 'closed' field lines appears to be a human convention not tied to any physical requirement, so long as the magnetic field lines never begin or end (Wikipedia: Magnetic Fields). 

As an additional check, I've examined a number of papers written in the past 100 years on the development of electromagnetism and on magnetic fields and field lines, many written by leaders in the field, including works by Alfven, Vasyliunas, Stern, Swann, and Falthammar, and have found no support for Scott's statements that field lines cannot be 'open'.  Many of these researchers use the concept themselves.  If Dr. Scott wishes to continue making this particular claim, he needs to provide more documentation than "Don Scott Says So".  Professionals with a stronger background in electromagnetism than Dr. Scott (or me) disagree with him.

Update: January 4, 2015: W.D. Clinger on the International Skeptics Forum has pointed out a nuance in this argument of how magnetic field lines can end at magnetic null points (i.e. reconnection sites).

Contrary to Dr. Scott's claims in “The Electric Sky”, 'Open' field lines do not violate Maxwell's Equations!

[1] D. E. Scott. Real Properties of Electromagnetic Fields and Plasma in the Cosmos. IEEE Transactions on Plasma Science, 35:822–827, August 2007. doi: 10.1109/TPS.2007.895424.

Wednesday, September 30, 2009

Public Presentation at National Capitol Area Skeptics

I'm scheduled to do a presentation for the National Capitol Area Skeptics based on my paper "The Cosmos in Your Pocket" .

The talk is scheduled for October 10, 2009 at 1:30PM at the Bethesda Library, Bethesda, Maryland and is open to the public.

Don't worry, it will be abbreviated to one hour duration, not the two hour marathon session at DragonCon.

Sunday, September 27, 2009

Scott Rebuttal. III. The Importance of Quantum Mechanics

I make comments on Dr. Scott's lack of mention of QM:
Dr. Scott, an electrical engineer, is clearly a victim of this professional isolation himself. I found little mention of quantum mechanics or its impact in astronomical observations and astrophysical understanding and the feedback astrophysics provided to Earth laboratories. Considering that the quantum mechanics that explains the spectra and energy source of the stars is the same quantum mechanics that has made modern microelectronics possible, I suspect Dr. Scott probably has some interesting misconceptions about this subspecialty of his own field. (”The Electric Sky: Short-Circuited”, pg 8)
Dr. Scott has a rather bizarre response:
A discussion of quantum mechanics has no place in my book. I intentionally do not discuss the very many subspecialties of electrical engineering. That was not the thrust of my book and I submit comments such as the one above are simply 'red-herrings‘ dragged across the path of that thrust. (”D.E. Scott Rebuts T. Bridgman”, pg 4).
But the quantum mechanics that explains atomic structure and spectra is the exact same quantum mechanics that made modern semiconductor electronics possible!  Does Dr. Scott know this?

One of the few references I provide that Dr. Scott did apparently examine is my paper “The Cosmos in Your Pocket: How Cosmological Science became Earth Technology” (Version 1 was available at that time). Dr. Scott reinforces the appearance of his misunderstanding of quantum mechanics on page 2 of his rebuttal:
At any rate, in one swoop, TB attempts to subsume all of the practical achievements of modern chemistry, solid-state physics, and electronics into owing their origins to astrophysics. This is absurd on its face. If he thinks that Leo Esaki, working for (what is now) Sony Corporation in 1957, had any thoughts about astrophysics in his mind while developing his tunnel diode, I submit he is delusional. What about Brattain, Bardeen, and Shockley while working at Bell Laboratories on their bipolar junction transistor – or the field-effect transistor? Were they thinking about astrophysics too? I very much doubt it. (D. E. Scott Rebuts T. Bridgman, pg 2)
Dr. Scott misses again. My point is that these individuals used the exact same quantum mechanics as Bethe, Teller, Gamow, and others used in solving problems in nuclear astrophysics. These successes in astrophysics also demonstrated the broad range of applicability of quantum mechanics, reinforcing both astrophysics and quantum mechanics, as well as the concept that the physical laws we measure on Earth apply in the distant cosmos as well.

But to fully appreciate the quantum connection between astrophysics and modern electronics, one needs to examine some of the history.

These two papers, by Alan Herries Wilson from 1931, are regarded by many as the papers that put semiconductors on a firm theoretical foundation.
These two papers turned semiconductors from a mysterious material (its primary use in electronics was the 'crystal' in crystal radio sets) to a substance that could be understood, and subsequently manipulated, on a fundamental level. John Bardeen and others regard them as the classic papers in semiconductor electronics (see Oral History Transcript — John Bardeen).

While I was originally planning a different approach for this article, another search of the literature revealed a far more interesting connection between semiconductor electronics and astrophysics. Consider this paper, published by the same Alan Herries Wilson earlier in 1931.
This was one of the first papers to use quantum mechanics in an attempt to understand the nuclear processes that could take place in stellar interiors.

This is a fascinating paper, as it examines the impact of quantum tunneling on nuclear reaction rates in stars. It did not solve any significant astrophysical problems, for it was a little before its time, but it outlined the quantum mechanical analyses that later researchers would use. Wilson could not solve the problem because he did not know about the neutron (which would be discovered the following year), nor did he have a way to include the newly-hypothesized neutrino in the reaction computations. A theory of neutrino interactions would not be available until some work by Enrico Fermi a few years later. Both of these discoveries were important for solving the problem of stellar nuclear reactions. It would be another eight years before Hans Bethe would solve the main bottleneck in the formation of helium from hydrogen [Bethe 1939]. Bethe's work would not fully solve the problem until after World War II, when the uncertainties in stellar compositions would be resolved.

Even more interesting is the fact that Wilson would note how astrophysicists recognized the implications of the Pauli Exclusion Principle (Wikipedia: Pauli Principle) (the fact that no two electrons, or more generally, identical fermions, can occupy the same quantum state at the same time) before the physicists [Wilson 1980]. Astrophysicists immediately made use of this principle. Ralph Fowler (who was one of Wilson's professors) had already used the Pauli Principle for computing the structure of condensed matter at high densities in the interior of stars [Fowler 1926]. This paper would lay the foundations of later research on degenerate states of matter (Wikipedia: Degenerate Matter), a critical development in understanding the structure of white dwarf (Wikipedia: White Dwarf) and neutron stars (Wikipedia: Neutron Stars).

Wilson applied the exact same quantum mechanics in both astrophysics and the development of semiconductor theory. And Wilson wasn't the only physicist to do this. Exploring the publications of Hans Bethe (Nobel Prize, 1967), Edward Teller, and others reveals contributions to our understanding of atoms and molecules as well as astrophysics - via the universal application of quantum mechanics.

This only reinforces my original claim, as the great majority of what we know about astrophysical plasmas is determined from their spectra, which can only be understood with quantum mechanics. This is the same quantum mechanics used in understanding technologies from semiconductor electronics to laser emission.

Prior to the discovery of astrophysical spectra, the only things we knew about the cosmos were positions (mostly 2-D, but 3-D if sufficiently close), colors, motions, and variability if we collected the data over time.

Spectra changed all that. With spectra, and the quantum mechanical framework which describes their formation, we can determine:
  • Compositions, ionization levels and temperatures: from lines and intensities of lines
  • Pressure and temperature: from the profile of the spectral lines
  • Radial velocities and intense gravitational fields: from spectral line shifts
  • Electric and magnetic fields: from line splitting, the Zeeman (Wikipedia: Zeeman Effect) and Stark effects (Wikipedia: Stark Effect).
Even on the Earth, quantum mechanics is needed for plasma diagnostics, since many high-temperature plasmas of industrial and commercial interest are hot enough to destroy material probes inserted into them. We therefore measure their spectra and use quantum mechanics to determine the temperature and ionization levels.
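As a concrete illustration of the "line shifts to radial velocities" item above, here's a minimal (non-relativistic) Python sketch. The wavelength values are illustrative, not from any particular observation:

```python
# Radial velocity from a Doppler-shifted spectral line (non-relativistic sketch).
C_KM_S = 299_792.458  # speed of light, km/s

def radial_velocity(rest_nm: float, observed_nm: float) -> float:
    """Line-of-sight velocity in km/s; positive means receding (redshifted)."""
    return C_KM_S * (observed_nm - rest_nm) / rest_nm

# Hydrogen-alpha has a rest wavelength of 656.281 nm; suppose we observe
# the line at 656.500 nm -- a shift of about 100 km/s away from us.
v = radial_velocity(656.281, 656.500)
```

The same measurement, made at many points across a galaxy's disk, is what produces the rotation curves discussed elsewhere on this blog.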

The quantum mechanics that makes our semiconductor electronics in our homes possible, and enhances our understanding of laboratory plasmas, is the same quantum mechanics that explains the spectra of stars as well as their nuclear energy source.

Did Dr. Scott even bother to check his 'facts' before writing his 'rebuttal'?  Is he attempting to evade acknowledging the role of quantum mechanics in his own field of electrical engineering, as well as in the fields of plasma physics and astrophysics?  Does Dr. Scott understand the role that the concept of Fermi energy plays in semiconductor electronics as well as white dwarf and neutron stars?

  1. J. B. Hearnshaw. The analysis of starlight: One hundred and fifty years of astronomical spectroscopy. Cambridge and New York, Cambridge University Press, 1986.
  2. R. H. Fowler. On dense matter. Monthly Notices of the Royal Astronomical Society, 87:114–122, December 1926.
  3. A. H. Wilson. Solid state physics 1925-1933: opportunities missed and opportunities seized. Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character, 371: 39–48, June 1980.
  4. H. A. Bethe and C. L. Critchfield. The formation of deuterons by proton combination. Physical Review, 54: 248–254, August 1938.

Thursday, September 17, 2009

Still no electric currents powering the galaxies...

The ESA Planck mission has released some first images of the cosmic microwave background.

Planck first light yields promising results

Still no sign of those giant galaxy-powering electric currents so promoted by the Electric Universe crowd...
See "Scott Rebuttal. II. The Peratt Galaxy Model vs. the Cosmic Microwave Background"

Friday, September 11, 2009

DragonCon 2009 Report

I had hoped to post an update while I was at DragonCon but having the family along made that a bit more difficult than expected. I return to find a bunch of unmoderated comments (now published, thanks for your patience).

I gave my talk, “The Cosmos In Your Pocket”, based on my paper, on late Saturday afternoon. The DragonCon science organizers allocated me a full two-and-a-half hours and not only did I use almost the full allotment of time, but actually had many attendees stay for the duration.

I attended a number of sessions in the science, space, and skepticism programming tracks including Kevin Grazier's “The Science of Battlestar Galactica”, Phil Plait's and Kevin's “Myths in the Movies” session, and the “Stealth Science and Skeptical Thought” panel (see image below) with Phil, Adam Savage (Mythbusters), Scott Sigler (science fiction author), Rebecca Watson (Skeptics Guide to the Universe) and Melissa Kaercher (inker for science-related comic books).

Seeing again
  • Phil Plait of Bad Astronomy & JREF: Phil and I have crossed paths briefly a number of times, starting back around 1996 (or was it 1997?) when I invited him to give an Astronomy Day presentation for the Greenbelt Astronomy Club. We also crossed paths at the 2004 Venus transit. He knows my face but can't quite remember my name. He did sign my copy of his new book, “Death from the Skies!”.
  • Eugenie Scott of NCSE: She spoke at the Goddard Scientific Colloquium in the fall of 2006 where I joined the speaker's luncheon.
Meeting for the first time
It apparently wouldn't be DragonCon without a report of Celebrity Sightings
All in all, it was a good trip for a first-time attending DragonCon.

The only downside was that registration lines seem to be a persistent issue, even for those who pre-registered. The line was so long Thursday evening that I gave up after about 50 minutes. Friday morning I managed to make it through in just under two hours. The only break in the boredom of waiting in line came when someone sent a beach ball flying around the room and a free-form volleyball game broke out, which was terminated when too many participants were hitting the large chandeliers in the ballroom. While the final lines were organized alphabetically, the A-B-C lines were full while later letters of the alphabet had nearly empty chutes. Random clumping or bad planning? Only the organizers know for sure...

Sunday, September 6, 2009

Theory Vs. Experiment. II

Most of the science we know was discovered based on the mismatch between what we thought we knew and experimental/observational results. There are a host of historical discoveries that were precipitated by the discovery of such discrepancies. But as you can see in these examples, the time between the discovery of a problem and its resolution can be years, even decades.

Here's a list of discoveries which started with discrepancies relevant to 'missing mass' which existed at one time and have been resolved.

Discrepancies found in the orbit of the planet Uranus (discovered 1821)
Hypotheses: a breakdown of Newtonian gravity; an undiscovered planet
Resolution: Planet Neptune discovered, 1846 (See Wikipedia: The Discovery of Neptune)
Time to resolution: 25 years

Discrepancies found in the proper motions of the relatively nearby stars Sirius and Procyon (discovered 1844)
Hypotheses: Something massive but very faint, which did not emit enough light to be seen in the glare of the primary star, was orbiting these stars.
Resolution: Faint companion stars found orbiting Sirius (1862) and Procyon (1896). These turned out to be white dwarf stars. (See Wikipedia: White Dwarf)
Time to resolution: 18 and 52 years

Discrepancies found in the orbit of the planet Mercury (discovered 1859)
Hypotheses: Undiscovered planet between Sun and orbit of Mercury. Proposed name is Vulcan but repeated searches do not find it.
Resolution: Postulation of the General Theory of relativity, 1915 (See Wikipedia: Perihelion Precession of Mercury)
Time to resolution: 56 years
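The General Relativity resolution is quantitative: the predicted perihelion advance per orbit, Δφ = 6πGM/(c²a(1−e²)), reproduces Mercury's anomalous ~43 arcseconds per century. A quick Python check, with constants rounded to a few digits:

```python
import math

# GR perihelion advance per orbit: dphi = 6*pi*G*M / (c^2 * a * (1 - e^2))
GM_SUN = 1.32712e20      # G * M_sun, m^3/s^2
C = 2.99792458e8         # speed of light, m/s
A_MERCURY = 5.7909e10    # Mercury's semi-major axis, m
E_MERCURY = 0.20563      # Mercury's orbital eccentricity
PERIOD_DAYS = 87.969     # Mercury's orbital period

dphi = 6 * math.pi * GM_SUN / (C**2 * A_MERCURY * (1 - E_MERCURY**2))  # rad/orbit
orbits_per_century = 36525.0 / PERIOD_DAYS
arcsec_per_century = dphi * orbits_per_century * (180.0 / math.pi) * 3600.0
# arcsec_per_century comes out very close to the observed ~43"/century anomaly.
```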

Beta-decay of atomic nuclei is found to violate conservation of energy and angular momentum (discovered 1911)
Hypotheses:
  • Beta-decay actually violates these conservation laws
  • An extra particle (electrically neutral, spin 1/2, very small mass) is emitted in beta-decay but escapes detection by current technologies (the neutrino hypothesis, 1930)
Resolution: Neutrino detected, 1956 (See Wikipedia: Neutrino)
Time to resolution: 45 years

Atoms with the same nuclear charge are found to have different atomic masses (discovered 1913) (See Wikipedia: Isotopes). For many elements, the mass of the atomic nucleus is about twice that accounted for by its protons.
Hypothesis: tightly bound states of electrons and protons make up the difference in mass
Resolution: Discovery of neutron, 1932 (See Wikipedia: Neutron)
Time to resolution: 19 years

Shortage of neutrinos emitted from the Sun (discovered 1968)
Hypotheses: errors in the standard solar model; neutrinos changing flavor (oscillating) in transit to the Earth
Resolution: Neutrino oscillations, 2003 (See Wikipedia: Neutrino Oscillations)
Time to resolution: 35 years

What many people forget is that in the years between discovery of the problem and the resolution, there was often much contention between scientists. In a number of cases, there were experiments performed which reinforced some hypotheses.

In the case of the neutrino, theories of its interaction were developed which allowed theorists to treat it as a real particle and make numerical predictions. This capability also played a role in the eventual discovery as it enabled researchers to better estimate what level of technology would be needed for a direct detection.

Here's the big discrepancy in astronomy that has yet to be resolved. This is the focus of the current controversy:

Discrepancies: Rotation curves of galaxies don't match the visible matter distribution (discovered 1933). Clusters of galaxies have galaxies moving too fast to be gravitationally bound.
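To see why this is a discrepancy: if the visible mass dominated, circular speeds outside it would fall off as the Keplerian v(r) = sqrt(GM/r), yet the observed curves stay roughly flat, implying enclosed mass that grows with radius. A rough Python sketch with illustrative numbers (the mass and speed values below are assumptions for demonstration, not measurements of any particular galaxy):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_VISIBLE = 2.0e41   # ~1e11 solar masses of visible matter (illustrative)
KPC = 3.086e19       # one kiloparsec, in meters

def keplerian_v(r_m: float, mass_kg: float) -> float:
    """Circular orbital speed if essentially all the mass lies inside radius r."""
    return math.sqrt(G * mass_kg / r_m)

# Keplerian prediction: going from 10 kpc to 40 kpc should halve the speed.
v10 = keplerian_v(10 * KPC, M_VISIBLE)
v40 = keplerian_v(40 * KPC, M_VISIBLE)

# But a flat observed curve at, say, 220 km/s out to 40 kpc implies an
# enclosed mass several times larger than the visible mass alone.
v_obs = 220e3  # m/s
m_enclosed = v_obs**2 * (40 * KPC) / G
```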

For information on the current state of searches for various particles beyond the Standard Model, check out the reviews at Particle Data Group, 2009 Reviews, Tables, and Plots

Frankly, I think the undiscovered subatomic particle option is most likely. It has the advantage of being the simplest solution that does not violate constraints from other observations. One could make the point that there seems to be an interesting hierarchy in the family of particles related to what interactions different classes of particles 'feel' (marked with an 'X').

Forces:                  gravity   weak   E&M   color (strong)
quarks                      X       X      X         X
electrons, muon, tau        X       X      X
neutrinos                   X       X
It appeals to a sense of symmetry (a surprisingly successful concept in particle physics) that there should be one more line
Forces:                  gravity   weak   E&M   color (strong)
'Dark Matter'               X
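That hierarchy can be encoded directly: each successive class of particle feels one fewer interaction, with the hypothesized dark-matter line completing the pattern. A small Python sketch (the dark-matter row is, of course, the speculative entry):

```python
# Which fundamental interactions each class of particle 'feels'.
# The first three rows are standard physics; the last is the hypothesized
# gravity-only dark matter particle that would extend the pattern.
couplings = {
    "quarks":                 {"gravity", "weak", "E&M", "color"},
    "electron, muon, tau":    {"gravity", "weak", "E&M"},
    "neutrinos":              {"gravity", "weak"},
    "dark matter (hypothesized)": {"gravity"},
}

# The appeal to symmetry: each row feels exactly one fewer interaction.
sizes = [len(forces) for forces in couplings.values()]
# sizes == [4, 3, 2, 1]
```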
In addition, history has strongly favored the discovery of new particles just when we thought we had found them all.

Consider the example from 1936, after the identification of electrons, protons, and neutrons, all the particles needed to build atoms: Carl Anderson discovered the muon in cosmic rays (see Wikipedia: Muon). It was such a surprise that the physicist I. I. Rabi quipped, “Who ordered that?”

Science involves finding solutions to difficult problems and sometimes it takes many years. I suspect there were cranks and crackpots exploiting the gaps in our understanding in the case of the older discrepancies, just as creationists and EU advocates try to exploit the more modern problems that are at the frontiers of our current knowledge.

In spite of the claims of pseudo-scientists, real scientists did the work and eventually solved the problems. They also improved on the measurements, sometimes revealing new discrepancies. Today, experiments measure neutrino oscillations using neutrinos emitted by reactors around the world (a calibrated source) that travel through the Earth to the detector. For more examples of science that started out as astronomical observations, see "The Cosmos in Your Pocket: Expanded and Revised".
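For the curious, the standard two-flavor vacuum-oscillation formula underlying such measurements is P_survival = 1 − sin²(2θ)·sin²(1.27 Δm²[eV²] L[km]/E[GeV]). A small Python sketch; the mixing and reactor-baseline numbers below are illustrative values chosen for demonstration, not results from any specific experiment:

```python
import math

def survival_probability(sin2_2theta: float, dm2_ev2: float,
                         L_km: float, E_GeV: float) -> float:
    """Two-flavor neutrino survival probability in vacuum (textbook formula).

    sin2_2theta: mixing amplitude sin^2(2*theta)
    dm2_ev2:     mass-squared splitting, eV^2
    L_km:        source-to-detector distance, km
    E_GeV:       neutrino energy, GeV
    """
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

# Illustrative reactor-scale numbers: ~4 MeV antineutrinos, ~180 km baseline.
p = survival_probability(0.85, 7.5e-5, 180.0, 0.004)
```

Because L is fixed by geography and the reactor flux is calibrated, a deficit in the detected rate maps directly onto the oscillation parameters.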

Comments illustrating more examples from physics and astronomy are welcome.