Sunday, June 24, 2012

Relativity Denial: The GPS 4-Satellite Solution

I occasionally receive email queries that warrant a substantial response.  Rather than writing an extensive reply seen only by one correspondent, I try to rework it into a blog post.  Here is a misconception that recently arrived in my inbox.

I have often written about how the GPS system must include relativistic corrections in the time-of-flight computation to compute the satellite-to-receiver range needed to determine the receiver's position (see GPS, Relativity & Geocentrism and Scott Rebuttal. I. GPS & Relativity).

The minimum number of satellites needed for a position determination is three (assuming your receiver has a reasonably accurate clock), so you can determine the receiver's three position components, x, y, z, in Cartesian ECEF coordinates (Wikipedia). 

If you have a fourth GPS satellite, then the mathematics demonstrates that the position computation can be done without an accurate clock time at the receiver.

Misconception: If four GPS satellites are available, so that you don't need to know the time at the receiver, then relativistic corrections are not necessary.  This is evidence that relativistic corrections are not really needed in the GPS system.

Why it is wrong: The relativistic corrections, as well as several other important corrections to the range computation, depend on the positions of the satellite(s) and the receiver.  These correction terms are in the fundamental range computation equation.  While you can use a fourth equation to eliminate the receiver time with an expression using the transmission time on the fourth satellite, the relativistic correction terms do not disappear, nor do they conveniently cancel.

Relativistic corrections remain important for accurate GPS position determination.

The Mathematical Details

The GPS Solution for Three Satellites

Using the time of the signal departing the satellite, t_s, and the time when the signal is received, t_r, we define what is called the pseudo-range, R = c(t_r - t_s), between the receiver, r, and the satellite, s.

This is simply the distance that a radio signal, traveling at the speed of light, can travel in the time between the two clocks.  We call it a pseudo-range because it turns out that radio signal propagation from the satellite to the ground receiver is not that simple.  To deal with this, we define the true range, which I will designate with the Greek letter rho, which is needed to compute the actual position of the receiver
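In code, the pseudo-range relation described above is a one-liner.  This Python sketch uses an illustrative 67 ms flight time, roughly what a signal from GPS altitude takes to reach the ground:

```python
c = 299792458.0  # speed of light, m/s

def pseudo_range(t_s, t_r):
    """R = c*(t_r - t_s): the distance light covers between the two clock readings."""
    return c * (t_r - t_s)

# A signal from GPS orbit (~20,200 km altitude) takes roughly 67 ms to arrive overhead:
print(pseudo_range(0.0, 0.067) / 1e3)  # ~20,086 km
```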

To obtain the true range from the pseudo-range between satellite s and receiver r, a number of corrections must be applied

  • Time Measurements: Errors in the clocks at the satellite and the receiver
  • Ionosphere:  A propagation delay due to the electron density in the ionosphere.  This delay is dispersive (Wikipedia) and determined by transmitting the GPS signal on two frequencies.
  • Troposphere: A non-dispersive propagation correction due to radio signal refraction (Wikipedia) in the troposphere.  The effect is influenced by the water content of the atmosphere.
  • Tidal: A correction due to deformations of the Earth's surface due to tides (see International Earth Rotation and Reference Systems Service).
  • Multipath: Timing differences created by reflection of the GPS signal from nearby objects
  • Relativistic: Relativity corrections due to the motion and position of the GPS satellite as well as the motion and position of the receiver
  • epsilon: General measurement errors
Each of these corrections depends on the position of the receiver and the position of the satellite.  To simplify the manipulations, here we will consolidate them into one correction, delta, which is different for each satellite, s, and receiver, r, pairing.
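To give a feel for the size of the relativistic term in the list above, this Python sketch estimates the two dominant clock effects for a GPS satellite.  The orbit values are approximate, and this is a back-of-the-envelope estimate of the clock rate offset, not the full position-dependent correction in the range equation:

```python
import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
c = 299792458.0          # speed of light, m/s
R_earth = 6.371e6        # mean Earth radius, m
r_gps = 2.656e7          # approximate GPS orbit radius (~20,200 km altitude), m

v = math.sqrt(GM / r_gps)   # circular orbital speed, ~3.9 km/s
day = 86400.0

# Special relativity: the satellite's orbital speed slows its clock.
sr = -(v**2 / (2 * c**2)) * day
# General relativity: the weaker gravitational potential at orbit speeds it up.
gr = (GM / c**2) * (1/R_earth - 1/r_gps) * day

print(sr * 1e6)          # ~ -7 microseconds/day
print(gr * 1e6)          # ~ +46 microseconds/day
print((sr + gr) * 1e6)   # net ~ +38 microseconds/day
```

At the speed of light, an uncorrected 38 microseconds/day clock offset corresponds to roughly 11 km/day of accumulated range error.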

If we want to solve the system of equations for three GPS satellites, we have:

Knowing the time at the receiver, t_r, and the time of emission from each satellite, t_{s1}, t_{s2}, t_{s3}, we can solve this system of equations for the position of the receiver, \vec{r_r}, which has three components, (x,y,z), to give the position in 3-dimensional space.

The GPS solution for Four Satellites
Now suppose we have a fourth satellite which is visible to our GPS receiver.  We have a similar pseudo-range equation for it.

Since we now have four equations and three unknowns, we have an overdetermined system. We can take advantage of this overdetermination to remove an input variable in the set of equations.  Since the GPS receivers tend to have the least precise clocks, we can use the fourth equation to eliminate the time of the receiver's clock from the other equations.  First, we manipulate the fourth satellite equation to give the receiver time:

Once we have solved the system, we will use this equation to determine the receiver time.  By this method, we can determine the time at the receiver to a precision higher than the receiver clock alone could provide.  But to solve the system, we must first substitute this result into the first three equations, where the subscript si represents the other three satellites, with i = 1, 2, 3.

Notice that all the corrections, delta, still must be included to solve the system.  In this form, we see that the correction for the receiver and satellite 4 must be added, while the corrections for the other three satellites must be subtracted.  All the corrections depend on the positions, and paths, between a given satellite and the receiver, and atop all this is a level of noise in the timing measurements. 

For relativistic effects to have no impact in the four satellite configuration, the correction between the receiver and satellite 4 must exactly match the correction between the receiver and ALL of the other three satellites.  Since all four satellites are at different positions relative to the observer (otherwise they would be colliding!), the chance that these corrections have identical numerical values is small,  so relativistic corrections remain important.

Solving for x,y,z

Some might wonder how we can solve such a set of equations, where the unknown quantities are mixed in with the known quantities.  We cannot rearrange the equations into a clean solution where all the unknowns are on the left-hand side and all the knowns on the right-hand side, such as:

These interdependencies make the equations non-linear.  However, they can still be solved by iterative techniques, usually a Kalman Filter (Wikipedia).
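As an illustration of the iterative approach, here is a minimal Gauss-Newton (Newton-Raphson) sketch that solves for (x, y, z) plus the receiver clock bias from four simulated pseudo-ranges.  The satellite positions, receiver position, and clock bias are made-up illustrative values, and all the corrections, delta, are omitted for simplicity:

```python
import numpy as np

c = 299792458.0

# Illustrative satellite ECEF positions (m) -- not real ephemeris data.
sats = np.array([
    [15600e3,  7540e3, 20140e3],
    [18760e3,  2750e3, 18610e3],
    [17610e3, 14630e3, 13480e3],
    [19170e3,   610e3, 18390e3],
])

# True (unknown) receiver state: position plus clock bias expressed in meters.
true_pos = np.array([4383e3, 1573e3, 4350e3])
true_bias = 5e-3 * c   # a 5 ms receiver clock error

# Simulated pseudo-ranges: geometric range plus the clock-bias term.
rho = np.linalg.norm(sats - true_pos, axis=1) + true_bias

# Gauss-Newton iteration for (x, y, z, c*dt), starting at Earth's center.
x = np.zeros(4)
for _ in range(15):
    d = np.linalg.norm(sats - x[:3], axis=1)
    resid = rho - (d + x[3])
    # Jacobian rows: unit vector from satellite toward the guess, plus a bias column.
    J = np.hstack([(x[:3] - sats) / d[:, None], np.ones((4, 1))])
    x += np.linalg.lstsq(J, resid, rcond=None)[0]

print(x[:3])        # recovered receiver position (m)
print(x[3] / c)     # recovered clock bias (s), ~5 ms
```

A real receiver adds the ionospheric, tropospheric, and relativistic corrections to each pseudo-range before this solve, which is exactly why those terms cannot be dropped in the four-satellite case.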

Additional References

Sunday, June 17, 2012

Investing in the Future

Neil deGrasse Tyson - We Stopped Dreaming (Episode 1)

Neil deGrasse Tyson: "Audacious Visions"

NASA is an investment in the future.

Penny4NASA is circulating a petition advocating that NASA should be maintained at 1% of the Federal budget because it is an investment.  It is currently at about 0.5% of the Federal budget.

NASA does work that depends on doing the science right, from orbital mechanics, to space weather and understanding of the Sun and solar environment.  Note that NO ONE from the Electric Universe (Challenges Evaded by EU), Geocentrist (barycenters, 3-bodies), or Young-Earth Creationist (SED) groups has demonstrated competence in using their claimed science knowledge to do any of these things.

America, and in many cases the rest of the world, benefit from doing the science right.

Disclosure: In my day job I am employed by a contractor and do science support work at a NASA center.  I cannot and do not speak for any of these organizations.

Sunday, June 10, 2012

Electric Universe: Peer Review Exercise 5

This is the last of five posts devoted to providing a more professional peer-review of the "Special Issue" of the Bentham Open Astronomy Journal (BOAJ) devoted to Plasma Cosmology and the Electric Universe (PC/EU).  While BOAJ claims to be a peer-reviewed journal, we see in these posts that the quality of the peer-review process for this issue was very questionable.  Each of the articles examined in these reviews exhibits many fundamental errors in physics (especially electromagnetism) and astronomy.  Many of the unchallenged mistakes are at levels which could be identified by an undergraduate physics student, or possibly even a competent EE undergraduate.

Review report by W.T. Bridgman and Nereid.
Quotes from the article discussed are in blue

Article Reviewed:
Toward a Real Cosmology in the 21st Century
by Wallace W. Thornhill

A large fraction of this article mentions many of the same topics raised in the David Smith article (see Electric Universe: Peer Review Exercise 2), so some of our responses will be duplicated. 


"invention of ‘dark matter’ that responds to gravity but is electromagnetically undetectable. Matter is an electromagnetic phenomenon, so how is this possible?"
Matter is electrically neutral in bulk when bound in atoms, or quasi-neutral (regions of small charge imbalance) as a plasma.  There are also subatomic particles, such as neutrinos, which are electrically neutral and are not composite structures.  Neutrinos were predicted in 1930 (and named in 1933 by Fermi) but not actually detected until 1956.  It took over twenty years to develop the engineering capability to build the necessary detector precisely because neutrinos DO NOT interact via electromagnetism.
"Instead the credit was given to George Gamow, a Big Bang advocate, despite his calculated CMBR temperature of 50K! [11]. That is an error in energy density of the universe of 10,000 times!"
The present-day CMBR estimate is not in the Gamow paper Thornhill cites, though much of the groundwork was laid there.  Equation (3) of that paper, for computing the temperature at a given time in the expansion, is the same as the modern result (compare to Peebles, "Principles of Physical Cosmology", pg 142).  The actual estimate in Alpher & Herman (1948) was 5K, based on the less accurate input parameter values available in 1948 for estimating the time in the early phase of the expansion.

Even if Thornhill were correct, an estimate of 50K is still only off by a factor of 50K/2.7K ~ 20 in temperature.  If we look at the energy density using the REAL first estimate of the CMB, 5K, we get an energy density ratio of (5/2.7)^4 ~ 12.  Within a factor of about 10 is pretty good for a first estimate with such uncertain initial data.  This is still much better than EU 'predictions'.
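The arithmetic in the paragraph above is easy to check (temperature rounded to 2.7 K as in the text; radiation energy density scales as T^4):

```python
T_cmb = 2.7   # present-day CMB temperature, K (rounded)

temp_ratio = 50 / T_cmb          # Gamow's 50 K vs. today: factor ~20 in temperature
energy_ratio = (5 / T_cmb)**4    # Alpher & Herman's 5 K: energy density ratio

print(temp_ratio)    # ~18.5
print(energy_ratio)  # ~12
```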

Since Mr. Thornhill is insisting on accurate numerical predictions from others, it is certainly fair to ask: Where are the EU numerical predictions of the microwave sky emission and uniformity? 

Where is the EU skymap prediction?
(Note the Lerner model cited by Thornhill in Section 4 below has already been demonstrated as flawed when compared to WMAP data.)

Again note that none of the papers in this EU 'Special Issue' have presented even one prediction that can be compared to actual measurements!  All attempts by others to make testable models from EU descriptions have generated results which differ from actual measurements by factors far larger than Thornhill's misquote above (see Electric Cosmos: The Solar Capacitor Model. III).
"Eddington estimated the temperature a body in space would cool to if all of the energy it received were from starlight within the galaxy. He found it to be 3.18 degrees Kelvin (3.18K) [12]."
But these temperature estimates were based on mean-energy density converted to an 'effective temperature' - not a black-body estimate which applies to the CMB.  The differences between these estimates are described in greater detail at Ned Wright's Cosmology Tutorial: "Eddington's Temperature of Space"
"One expert has called into question both the theory and experimental detection of the CMBR. “ appears that many of the devices used as emissivity references …
This is a rather strange reference for EU theorists to use.  Dr. Robitaille is far from an expert on the topic.  Robitaille criticizes WMAP analysis techniques such as multiple measurements to reduce instrumental noise (used in many technologies, and by amateur astrophotographers) and extraction of unknown signals by identification and subtraction of foreground signals of known characteristics.  These are techniques routinely used in DSP (digital signal processing) by electrical engineers - facts apparently unknown to EU 'theorists'. 
As Geoffrey Burbidge noted, “This is why the Big Bang theory cannot be claimed to explain the microwave background or to explain a cosmic helium value close to 0.25” [16].
A more complete excerpt of this quote can be found at the Cambridge Catalog page.  Examination of the original papers Burbidge mentions shows they do not match the values or circumstances which Burbidge quotes.

3 Discordant Redshifts

Discordant redshifts should be readily apparent in large catalogs like the SDSS and 2dFGRS catalogs.  How do they manage to so effectively mimic a magnitude-limited, uniform spatial distribution (see Quantized Redshifts XI. My Designer Universe Meets Some Data and What's Next...)?

Simple uniform cosmos model redshift distribution compared to 2dFGRS & SDSS
The data for these types of analyses are freely available, and generating distribution plots of model universes is fairly simple, as noted above.  Why haven't Arp or the EU supporters done this very basic data analysis exercise?

Quoting Sagan

“If Arp is right, the exotic mechanisms proposed to explain the energy source of the distant quasars—supernova chain reactions, super-massive black holes and the like—would be unnecessary. Quasars need not then be very distant” [24].
Arp's solution has galaxies popping out 'baby galaxies' like wet Mogwai! (Wikipedia)  What is the mechanism for this?  Most sources describing this are inconsistent with any described EU galaxy model, so how could this be evidence for EU 'theories'?

This claim raises the same issues as I dealt with in Electric Universe: Peer Review Exercise 2.  If you are considering galaxies with larger redshifts 'in front of' lower-redshift galaxies, take a close look at another Hubble image:
Rich Background of Galaxies Behind the Tadpole Galaxy

and answer these questions:
  • Are the tiny spiral galaxies in this image tiny foreground galaxies, or distant background galaxies?
  • How do you tell?  What is the mechanism to objectively make the distinction?
You can do this exercise with almost any Hubble image taken with the newer cameras.  More info at Halton Arp's Discordant Redshifts.

"The simplest answer, from the highly successful field of plasma cosmology, is that it represents the natural microwave radiation from electric current filaments in interstellar plasma local to the Sun."  “A simple, inhomogeneous model of such an absorbing medium can reproduce both the isotropy and spectrum of the CBR within the limits observed by COBE, and in fact gives a better fit to the spectrum observations than does a pure blackbody” [25]
Lerner's analysis covers only the sky-averaged spectrum from FIRAS, not the actual CMB map such as those generated by COBE and WMAP (WMAP Data Products).   Like Peratt, Lerner states that he expected cylindrically symmetric filament structures to be visible.  WMAP killed this model as the filamentary structures through the galaxies are not seen, even for the nearest galaxies (see Scott Rebuttal. II. The Peratt Galaxy Model vs. the Cosmic Microwave Background).
"But all filamentary plasmas generate microwaves."
Which means that if EU hypotheses were correct, we should see the galaxy, and even the star-powering current streams in WMAP data.  Yet we do not.  Why?
"Fig. (1) is an adaptation of Verschuur’s Neutral Hydrogen Filaments at High Galactic Latitudes [31]. The HI plasma filaments are formed by the scavenging action of interstellar Birkeland currents flowing in our galaxy."
How can HI regions, which are electrically neutral and must be very cold for the hyperfine transition to dominate the emission, be tracers of currents, which must be very hot to maintain ionization?
Incidentally, the 21 cm HI transition photons have yet to be measured in the laboratory.  The transition, and its probability, is a theoretical prediction derived from the known energy levels of the hydrogen atom.
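For readers who want the numbers behind the 21 cm line, this sketch converts the precisely known hyperfine transition frequency into a wavelength and an equivalent temperature, illustrating just how low-energy (and hence cold-gas-tracing) this transition is:

```python
h = 6.62607015e-34   # Planck constant, J s
k = 1.380649e-23     # Boltzmann constant, J/K
c = 2.99792458e8     # speed of light, m/s
nu = 1420.40575e6    # HI hyperfine transition frequency, Hz

wavelength_cm = c / nu * 100   # ~21.1 cm
T_equiv = h * nu / k           # photon energy expressed as a temperature, ~0.07 K

print(wavelength_cm)
print(T_equiv)
```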

"The problem is that there don't appear to be enough radio galaxies to account for the signal ARCADE detected. “You'd have to pack them into the universe like sardines,”
As is common in pseudo-science practice, Thornhill cites a press release instead of the primary paper.  While the ARCADE results are still a subject of ongoing research (see papers citing the original publication), a number of these papers suggest solutions which require only minor revisions to the standard cosmology.  For Thornhill to claim that the ARCADE result is evidence for EU claims is like claiming that errors in weather forecasting are due to the assumption that the Earth is round and could be fixed by using a flat Earth!

Where is the numerical prediction of the radio emission expected from all the current streams required in the EU model?  Peratt and Lerner, as noted by EU supporters, claim the current streams are as bright as the CMB, but we don't see the streams in the WMAP maps.  This places their analysis back to square one.

5. Plasma Cosmology

"On the cosmological scale this brings us to the subject of plasma cosmology [34], a laboratory-testable theory"
How will they build an entire star or galaxy in the laboratory without using the exact same computational techniques that have been used for mainstream astronomy?
“Big Bang proponents have won the political and funding battle so that virtually all financial and experimental resources in cosmology are devoted to Big Bang studies. Funding comes from only a few sources, and supporters of the Big Bang dominate all the peer-review committees that control the funds. As a result, the dominance of the Big Bang within the field has become self-sustaining, irrespective of the scientific validity of the theory.”
Replace "Big Bang" with Newton's Laws and Biblical Geocentrists can use this claim as well.  This is a whine common to every pseudo-science, and pseudo-scientist.
"Plasma cosmology is a good theory because it is predictive and empirically testable. It accommodates new discoveries without resort to ad hocery and inventions of new forms of matter or energy."
Yet neither Mr. Thornhill nor other advocates of Plasma Cosmology (or EU) have presented any 'predictions' of their theories that can be tested against actual measurements.  That is the standard that must be met by the scientists they criticize - yet EU does not (and cannot) meet this standard.  Most of their 'evidence' consists of "X looks like Y so X must be Y" (Wikipedia: Pareidolia). 
"Alfvén notes that “relativistic DLs in interstellar space may accelerate ions up to cosmic ray [TeV] energies” [38]"
Technically true, in regions where DLs (Wikipedia) can form.  However, EU advocates often invoke DLs in arbitrary locations where there is no mechanism to form them.  Note the Peratt quote in the Wikipedia article:  
"Since the double layer acts as a load, there has to be an external source maintaining the potential difference and driving the current. In the laboratory this source is usually an electrical power supply, whereas in space it may be the magnetic energy stored in an extended current system, which responds to a change in current with an inductive voltage".
Can an EU 'theorist' describe the conditions under which a double layer can form in nature?
"Plasma cosmology has no such difficulty because electrical power is being delivered from the rotation of the galaxy to form the DLs"
This model completely conflicts with the Peratt galaxy model.  Where is the model to compute the power in the DLs from galaxy rotation?  What are the input parameters?


“the evolution of the solar system as a whole is chaotic, with a time scale of exponential divergence of about 4 million years”
Even the authors of the paper cited expressed concern that this may be a numerical artifact and not real.
If mass is an electromagnetic variable dependent on the distribution of matter and charge, then ‘G’ is different for every celestial body!
Since charges are subject to such strong forces, far stronger than gravity as noted by Mr. Thornhill, an object with a significant charge, say negative, would disperse that charge through mutual repulsion (unless a stronger force holds it together - if so, from where?), so the charge of each celestial body could easily change over time.  In that case we would observe significant deviations of celestial bodies from Keplerian motion.

Has Mr. Thornhill computed the trajectory of any celestial configuration or interplanetary mission with this 'model'?  Present proof that this computation has been done and generates the same result as the standard 'assumption' of constant 'G'.

If 'G' varies for every celestial body, then how is it that NASA and other space agencies can use the 'assumption' of a constant G to successfully navigate between planets?  Is Mr. Thornhill suggesting that interplanetary flights are somehow faked?

If Mr. Thornhill is so bold as to actually attach some NUMBERS to this claim, I will be happy to run the simulation on my N-body code which I have written for just such experiments.

Einstein ‘postulated away’ the æther while Maxwell’s theory of light waves requires it.
Maxwell assumed an aether because the other wave phenomena known in his day, such as sound and water waves, required a medium.  It was a mental tool that Maxwell used to examine the processes, but it was later demonstrated to be unnecessary.
Gravity is an indirect cause of bending of light paths by ponderable bodies.

Cahill writes, “the Einstein postulates have had an enormously negative influence on the development of physics, and it could be argued that they have resulted essentially in a 100-year period of stagnation of physics” [47]
It should be noted that physicists and engineers who understand relativity built the GPS system (see GPS, Relativity & Geocentrism and Scott Rebuttal. I. GPS & Relativity) and other technologies that require high time precision. 

Where are the technologies built using the 'interpretation' of those who deny relativity? 

When it comes to the GPS system, their 'theories' essentially amount to violating the documented instrument specifications.

A near-instantaneous electric force is also required to maintain coherence within all subatomic particles and within the atom. That this obvious requirement has been overlooked is surprising.
This is declared with no citation or evidence.  Where's a reference?  The claim is also dubious considering that these were the types of models examined immediately after the Rutherford experiments indicated the nucleus was much smaller than the atom.

7. Stars in an Electric Universe

The hourglass-shaped stellar electromagnetic (Bennett) pinch can be recognized in planetary nebulae, where the ‘dark current’ mode of a plasma circuit becomes visible in ‘glow mode.’ The electrical model of stars has that pinch operate continuously from the birth of the star. Stars are not isolated in space, they have a galactic electrical energy source.
The Birkeland current model requires inflow and outflow on opposite sides of the star.  Actual Doppler measurements from spectra show ONLY outflows from the star.  Where is the incoming current?

The 'dark current' is still an emitter of radio and microwave emission, by Thornhill's statements above.  Where are these current streams in the radio map?

The axial beaded rings structure of Supernova 1987A shows that novae are an electrical phenomenon involving exploding plasma double layers and electrical ejection of stellar matter. They are not due to anything happening inside the star [55]. There can be no ‘neutron star’ remnant [20].
Neutrinos were predicted and detected from SN 1987A (see also Neutrino Oscillations: Yet Another Blow Against Non-nuclear Stellar Energy...)
Where is the neutrino prediction from the EU stellar model?  How is it computed?
Stars are born with a dense body of heavy elements (revealed in spectra of supernovae) and an extensive upper atmosphere of hydrogen and helium.

All stars produce heavy elements in their photospheric plasma discharge, principally by neutron capture.
Where do the free neutrons come from?  By which reaction? 

Considering that Thornhill et al. often complain about the short lifetime of free neutrons, these neutrons cannot just float around for very long waiting for capture.  This is a problem for EU which they have yet to acknowledge.

Yet a star was recently discovered that “shouldn’t exist” because it is too big to be inflated by a central fire [58].
The 150 solar mass limit has a number of caveats, depending on the evolutionary history of the star.  There are stellar histories that can bypass this limit.
Yet plasma physics imposes severe constraints on the current of a stream where the z-pinch will kill the current. 
How does EU deal with the current constraints for z-pinches identified by Alfven and revised by others?

8. The Electric Sun

“A star can be defined as a body that satisfies two conditions: (a) it forms the anode focus of a plasma glow discharge; (b) it radiates energy supplied by an external source.”

The tufted plasma sheath above the stellar anode seems to be the cosmic equivalent of a ‘PNP transistor,’ a simple electronic device using small changes in voltage to control large changes in electrical power output.
Where does the control voltage come from?  What controls it?
Figure 7. 
No units.  No data values. This figure is scientifically useless.
The small but relatively constant accelerating voltage gradient beyond the corona is responsible for accelerating the solar wind away from the Sun.
Only small-scale electric fields have been measured in the solar wind:
A global-scale electric field can be set up by the solar wind due to dynamic interactions of the ions and electrons whose bulk motion is outward from the Sun, but the voltage is on the order of 1000 volts between the photosphere and 1 AU (Kinetic Physics of the Solar Corona and Solar Wind).

These voltages and their corresponding currents have nowhere near enough power to explain the total luminosity of the Sun or other stars.
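A rough order-of-magnitude check makes the power shortfall explicit.  Here the ~1000 V figure quoted above is combined with the standard value for the solar luminosity to see what current would be required:

```python
L_sun = 3.828e26   # solar luminosity, W
V = 1000.0         # order-of-magnitude potential drop quoted above, volts

# Power = voltage x current, so the current needed to supply the Sun's output:
I_needed = L_sun / V
print(I_needed)    # ~4e23 amperes
```

No current remotely close to this has been observed in the solar wind.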

If a strong external electric field existed which drove the positive ions outward, it would also force the lighter electrons toward the Sun, and very rapidly, since they are so light relative to protons.

In situ satellite measurements reveal the ions moving outward from the Sun, and the electrons (on average) moving outward, consistent with the mainstream solar wind models and contrary to the EU model.
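The electron/proton asymmetry in the argument above is easy to quantify: in the same electric field both species feel the same magnitude of force, so their accelerations differ by the mass ratio (CODATA values):

```python
m_p = 1.67262192369e-27   # proton mass, kg
m_e = 9.1093837015e-31    # electron mass, kg

# Same |force| in the same E field, so acceleration scales as 1/mass:
accel_ratio = m_p / m_e
print(accel_ratio)   # ~1836: electrons respond ~1836x more strongly
```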

The standard thermonuclear star theory has no coherent explanation for the approximately eleven-year sunspot cycle.
False: Solar dynamo models generate predictions testable against actual measurement (see The Solar Dynamo: Toroidal and Radial Magnetic Fields).

Where are the numerical predictions of the solar cycle from the EU models?

The current must close at large distances (B3), either as a homogeneous current layer, or — more likely — as a pinched current. The Birkeland current (B2) signature may have been discovered by the Ulysses spacecraft as unexpected variations in the magnetic field above the solar pole [66].
It's not clear why Thornhill makes this claim, as the reference (GRL 2002) has NO mention of Birkeland currents.  The authors attribute the magnetic holes and decreases to a form of Alfven wave.  Short of Thornhill demonstrating how a Birkeland current could explain the magnetic intensities and directions (and contain sufficient power to explain the solar luminosity), this claim appears to be made up.

9. Electric Stars

Figure 12: 
No units.  No data values.  This figure is scientifically useless.
the current density is responsible for both the luminosity (y-axis) and the color temperature (x-axis) of the H-R diagram. That explains the near 45° slope of the so-called ‘main sequence’ stars in the corrected H-R diagram (right)

At the lower left-hand end of the main sequence we find the red dwarfs — stars under low electrical stress, in which a good deal of the red light comes from the chromospheric anode glow. Anode tufting or flaring is sparse, if any, and may occur preferentially at the magnetic poles.
What is "electrical stress"?  What parameters go into this measure?  How is it computed?  What values correspond to 'high electrical stress' and 'low electrical stress'?

Why is there a gap between the white dwarfs and the main sequence?  If this separation is due to a 'dark current', then there should be a population of stars that emit only radio waves corresponding to this range of luminosities, and the current value should be consistent with a 'dark current'.  What numerical values correspond to this range? 

At the top right of the main sequence the light from the tufts is the electric blue of a true arc, and the stars appear as ‘blue giants’ — intensely hot objects considerably larger than our Sun. These blue giants tend to be concentrated on the central axes of our galaxy’s spiral arms, where the interstellar Birkeland current density is highest.
Stellar emission is dominated by a black-body spectral profile.  Electrical arcs are not. (see Electric Universe: Peer Review Exercise 2 for a solar spectrum in visible light).  Here is a plot of the WHI standard solar spectrum from 2008 (LISIRD) which emphasizes spectral lines in the ultraviolet and x-ray range. 
Click to enlarge.  Red lines mark the wavelengths of AIA cameras on SDO.  The wavelength scale is logarithmic to show detailed structure in the ultraviolet to x-ray region.  The blue curve marks the blackbody spectrum corresponding to the effective temperature of total solar energy flux.  Note the majority of the energy corresponds to this curve.
Where is the discharge experiment which produces this spectrum (including the lines and their relative intensities)?
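The blackbody character of the solar spectrum noted above can be checked with Wien's displacement law, using the standard value of the Sun's effective temperature:

```python
b = 2.897771955e-3   # Wien displacement constant, m K
T_eff = 5772.0       # solar effective temperature, K

# Peak wavelength of the blackbody curve matching the Sun's total energy flux:
wl_peak_nm = b / T_eff * 1e9
print(wl_peak_nm)    # ~502 nm: the solar blackbody curve peaks in the visible
```

An electric arc has no reason to peak here; a blackbody at this temperature must.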
Cosmic Birkeland currents operate for the most part in ‘dark’ mode.
'Dark mode' is an archaic term indicating that no visible light is emitted.  Laboratory experiments have demonstrated that these modes do emit in radio and microwaves.  In the EU model, there should be radio-emitting current streams detected passing through every star.  We have full-sky surveys across a broad band of wavelengths, including radio and microwave, and see no such structures.  Where are they?
Engineers find it easy to light our cities with electrical power generated at some distance from the city.
Where are the generators for the cosmic-scale process?  Where do they get their power? 

This is the cosmic-scale problem of EU and PC that is always evaded.  How can they claim their model works at all when they have nothing but failures applying their models on larger scales?


  • The entire 'paper' is a mishmash of speculations, many of which conflict with each other, and none of which provide numerical predictions that can be compared to actual measurements.  This paper is the EU equivalent of the creationist's 'Gish Gallop' (RationalWiki).
  • Generally sloppy documentation of references that actually back the claims made, if such references exist at all. (Credible Hulk Angry!)
  • Like many EU 'theorists', Thornhill claims that any problem with the standard model is automatically evidence for the EU model.  Of course, every crank uses many of the same evidences for their models as well, so how do you tell the EU models apart from those of the other cranks?  So far, you can't!
  • The problem areas they claim as evidence for the EU models are often small compared to the body of evidence that is well understood.  It's as if one used the problems in understanding hurricane or tornado formation as evidence that the Earth isn't round.
How can EU be a cosmology of the 21st Century when its understanding of science is incapable of moving beyond the 19th Century?
Note: Comments that DIRECTLY address the points in THIS post are favored.  Since there will be a post on each of the five papers in the EU 'Special Issue', comments more relevant to one of those other papers should await that specific post.

Update June 11, 2012: fixed a bad link and minor grammatical errors.

Sunday, June 3, 2012

Electric Universe: Peer Review Exercise 4

This is the fourth of five posts devoted to providing a more professional peer-review of the "Special Issue" of the Bentham Open Astronomy Journal (BOAJ) devoted to Plasma Cosmology and the Electric Universe (PC/EU).  While BOAJ claims to be a peer-reviewed journal, we'll see in these posts that the quality of the peer-review process for this issue was very questionable.  Each of the articles examined in these reviews exhibits many fundamental errors in physics (especially electromagnetism) and astronomy.  Many of the unchallenged mistakes are at levels which could be identified by an undergraduate physics student, or possibly even a competent EE undergraduate.

Review report by W.T. Bridgman and Nereid.
Quotes from the article discussed are in blue

Article Reviewed:
Laboratory Modeling of Meteorite Impact Craters by Z-pinch Plasma

by C. J. Ransom

This paper exhibits a number of shortcomings which would explain why it could not be accepted at a journal with a more rigorous peer-review process.

Here's a few of the questions immediately raised by this paper which the author does not answer but which are well established by impact researchers (see General References).
  • What energies were needed to form pits of a specified size?  There is no graph of the relationship of input energy and crater diameter and/or depth.
  • What detailed crater profile is created by the arc?  How does it vary with current and/or voltage?
  • The author ignores the implications of their arc formation model for large-scale cratering.  How much energy is needed to produce craters of large size (10-100 km diameters)?  Where does the energy come from?
  • We have many examples of objects moving around the solar system that can impact other objects and release many megatons' equivalent of energy at a localized point of impact, such as Shoemaker-Levy 9 (Wikipedia).  We have no such examples of electric discharges or arcs that can deliver an equivalent energy density on such a large scale.
  • The author mentions the problem of corona crater formations on Venus, but his reference is a bit out of date:  See Subduction initiation by thermal chemical plumes: Numerical studies
General References
Note: Comments that DIRECTLY address the points in THIS post are favored.  Since there will be a post on each of the five papers in the EU 'Special Issue', comments more relevant to one of those other papers should await that specific post.