Tuesday, August 31, 2010

'Out There' Astrophysics Impacts Technology (again)

A favorite staple of high-tech science fiction, the gigantic laser weapon, may have some limitations imposed by fundamental physics.

Physics Central: Lasers reaching their limit

In 1997 at the Stanford Linear Accelerator Center (SLAC), electrons with 47 GeV (giga-electron volts) of energy were collided with the beam of a green laser.  The high-energy electrons struck the photons at an angle that transferred energy to the photons in the laboratory rest frame, a process known as the inverse Compton effect.  This boosted the energy of the laser photons from the green wavelength of light up into the gamma-ray range.  These gamma-ray photons subsequently collided with other low-energy photons in the laser beam, creating electron-positron pairs.  The photons collided with enough energy in the center-of-momentum (CM) frame that their combined energy exceeded 1.022 MeV (million electron volts), twice the electron rest energy and the threshold for pair production.  This was the first time photon energy was directly converted into matter.  It is the inverse of the process in which electrons and positrons collide to form gamma-rays.  [NY Times: Scientists Use Light to Create Particles, 4]
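To put rough numbers on this, here is a minimal back-of-the-envelope sketch in Python.  The 47 GeV beam energy comes from the text above; the ~2.35 eV green-laser photon energy and the head-on, single-photon geometry are illustrative assumptions of mine, and the actual experiment relied on the absorption of several laser photons at once (hence the 'multiphoton' in the title of reference 4).

```python
# Back-of-the-envelope kinematics for the SLAC experiment described above.
# Assumptions (for illustration only): 47 GeV electrons, a ~530 nm green
# laser photon (~2.35 eV), head-on single-photon collisions.

E_e = 47e9      # electron beam energy, eV
m_e = 0.511e6   # electron rest energy, eV
eps = 2.35      # green laser photon energy, eV

gamma = E_e / m_e   # electron Lorentz factor, ~9e4

# Maximum energy of an inverse-Compton backscattered photon (head-on collision):
x = 4 * gamma * eps / m_e
E_gamma_max = E_e * x / (1 + x)
print(f"Backscattered gamma-ray energy: up to ~{E_gamma_max/1e9:.0f} GeV")

# The two-photon pair-production threshold for a head-on collision requires
# E1 * E2 >= (m_e c^2)^2, so the minimum gamma-ray energy needed to
# pair-produce against a single 2.35 eV laser photon is:
E_min = m_e**2 / eps
print(f"Threshold against one laser photon: ~{E_min/1e9:.0f} GeV")
```

With these numbers the backscattered photons come out near 30 GeV while the single-photon threshold is over 100 GeV, which is why the observed pair production involved the gamma-ray interacting with several laser photons at once.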

It is now becoming clear that above some photon energy density, this pair-production process can happen spontaneously: enough photons will have energy above the threshold that they will start a cascade of pair production, followed by pair annihilation, followed by more pair production.  This suggests there is a quantum-imposed limit to the energy density attainable in lasers [1].

While this might not seem to be an astrophysics issue, one needs to look at the history.  I mentioned some of this in an earlier post (see Testing Science at the Leading Edge).

When antimatter was first discovered in 1932, with the identification of the positron, we had the first experimental verification of the process of matter-antimatter annihilation, where the collision of an electron and positron would produce two photons (with no other particles around, at least two photons are required to conserve momentum).

One of the heavily tested (but by no means proven) fundamental principles of physics is that sub-atomic processes are reversible in time.  The principle has been tested in many cases and found to hold, but it has not been demonstrated as an absolute.  However, it holds so well that it is generally assumed valid for interactions where it has not yet been tested.  If an opportunity arises where it is tested and fails, there will undoubtedly be a Nobel Prize for the researcher who demonstrates it.

So if an electron and positron can collide to produce two photons, then by time-reversal symmetry it stands to reason that one can collide two photons of sufficient energy (a combined center-of-momentum energy in excess of 1.022 MeV) and create an electron-positron pair.  The probability for such a reaction was first calculated in 1934, shortly after the discovery of the positron, by Breit and Wheeler [2,3].  The reaction probability was sufficiently small that no one in the 1930s had the technology to test it, so it remained an interesting concept.
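Stated as a formula (a standard kinematics relation, included here for clarity), the requirement is that the center-of-momentum energy of two photons with lab-frame energies E_1 and E_2, crossing at angle θ, exceed the rest energy of the electron-positron pair:

```latex
% Two-photon (Breit-Wheeler) pair-production threshold
\[
  \sqrt{s} \;=\; \sqrt{2\,E_1 E_2\,(1-\cos\theta)}
  \;\ge\; 2\,m_e c^2 \;\approx\; 1.022\ \mathrm{MeV},
\]
\[
  \text{which for a head-on collision } (\theta = 180^{\circ})
  \text{ reduces to } E_1 E_2 \ge (m_e c^2)^2 .
\]
```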

But in the 1960s, x-ray detectors (wikipedia, NASA/GSFC) launched on board rockets above the Earth's atmosphere (which is opaque to cosmic x-rays) began detecting high-energy point sources in space.  Gamma-ray detectors would detect photons with energies in excess of the 1.022 MeV threshold, and the question arose as to what could produce these high-energy photons (wikipedia, NASA/GSFC).

One of the processes recognized as a possible source of these photons was an extremely high-temperature plasma of electrons, positrons, and photons, also called a pair plasma.  Here are just a few of the papers published studying the environment created by such a plasma.

    •    1964, Neutrino Processes and Pair Formation in Massive Stars and Supernovae
    •    1979, Photon Pair Production in Astrophysical Transrelativistic Plasmas
    •    1981, Annihilation radiation from a hot e⁺e⁻ plasma
    •    1982, Relativistic thermal plasmas - Pair processes and equilibria
    •    1983, Radiation spectrum of optically thin relativistic electron-positron plasma
    •    1984, Spectra from pair-equilibrium plasmas
    •    1995, Thermal Comptonization in Mildly Relativistic Pair Plasmas

Astrophysicists have been exploring this type of plasma environment for thirty years, prior to verification of the process in the laboratory, based only on the extrapolation of some very fundamental physical principles. 

There are a surprising number of phenomena where a fundamental principle has been subjected to pretty heavy testing at current laboratory scales: energy-momentum conservation, time reversibility, Lorentz invariance, the wave function properties of fermions and bosons, etc.  Astrophysicists have occasionally explored the extreme limits of these principles and obtained some unusual predictions.  For example, the fact that electrons and neutrons are fermions (no two of which can occupy the same quantum state at the same time, AKA the Pauli exclusion principle) implies that there are high-density configurations where an object can be held up by the 'pressure' created by this limit.  Computations demonstrate that such objects would have sizes and masses consistent with white dwarfs and neutron stars.  I'm still assembling some of the fascinating nuclear physics surrounding these ideas.
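As a concrete illustration of where the degeneracy-pressure argument leads, here is a rough numerical sketch of the Chandrasekhar mass, the maximum mass that electron degeneracy pressure can support against gravity.  It uses the standard polytrope-based expression with an assumed mean molecular weight per electron of 2 (a carbon/oxygen composition); treat it as an order-of-magnitude illustration, not a stellar-structure calculation.

```python
import math

# Rough estimate of the Chandrasekhar mass: the maximum mass supportable
# against gravity by electron degeneracy pressure alone.

hbar  = 1.0546e-34   # J s
c     = 2.998e8      # m / s
G     = 6.674e-11    # m^3 kg^-1 s^-2
m_H   = 1.6735e-27   # kg, hydrogen atom mass
M_sun = 1.989e30     # kg
mu_e  = 2.0          # assumed mean molecular weight per electron (C/O mix)

# M_Ch = (omega * sqrt(3*pi) / 2) * (hbar*c/G)^(3/2) / (mu_e * m_H)^2,
# where omega ~ 2.018 comes from the n = 3 Lane-Emden (polytrope) solution.
omega = 2.018
M_ch = (omega * math.sqrt(3 * math.pi) / 2) * (hbar * c / G) ** 1.5 / (mu_e * m_H) ** 2

print(f"Chandrasekhar mass ~ {M_ch / M_sun:.2f} solar masses")   # ~1.4
```

The result, about 1.4 solar masses, is the familiar upper limit on white dwarf masses, obtained from little more than the Pauli principle, quantum statistics, and Newtonian gravity.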

References
  1. Limitations on the attainable intensity of high power lasers
  2. G. Breit and J. A. Wheeler. Collision of Two Light Quanta. Physical Review, 46:1087–1091, December 1934. doi: 10.1103/PhysRev.46.1087.
  3. M. S. Plesset and J. A. Wheeler. Inelastic Scattering of Quanta with Production of Pairs. Physical Review,  48:302–306, August 1935. doi: 10.1103/PhysRev.48.302.
  4. D. L. Burke, R. C. Field, G. Horton-Smith, J. E. Spencer, D. Walz, S. C. Berridge, W. M. Bugg, K. Shmakov, A. W. Weidemann, C. Bula, K. T. McDonald, E. J. Prebys, C. Bamber, S. J. Boege, T. Koffas, T. Kotseroglou, A. C. Melissinos, D. D. Meyerhofer, D. A. Reis, and W. Ragg. Positron Production in Multiphoton Light-by-Light Scattering. Physical Review Letters, 79:1626–1629, September 1997. doi: 10.1103/PhysRevLett.79.1626.

Saturday, August 21, 2010

Electric Universe: Real Plasma Physicists BUILD Mathematical Models

In the previous post on plasma modeling, I challenged Electric Universe (EU) supporters on their use (or lack of use) of mathematical models which can actually be tested.  The best that the EU boosters could come up with was work by Hannes Alfven, “Cosmical Electrodynamics” (1963) and “Cosmic Plasma” (1981), and Anthony Peratt's “Physics of the Plasma Universe” (1992).  Another popular book quoted by EU supporters is James Cobine's “Gaseous Conductors”, originally published in 1941!

While these are certainly excellent texts on the fundamentals of plasma physics, they are considerably dated in terms of modern techniques of mathematical analysis, plasma simulations, and plasma diagnostics, especially when it comes to using the spectra of ions and atoms in the plasma to determine physical conditions.  Plenty of researchers have improved experimental and theoretical techniques in the twenty or more years since these books were published.

Then there is the whole issue of the growth of computing power.  Wasn't Peratt's original galaxy model run on a machine around the mid-1980s?  In 1986, the Cray X-MP had a speed of about 1 GFLOPS.  Depending on the benchmarks, modern commercial-grade desktop computers are timed at 30-40 GFLOPS (Wikipedia: Xeon processors).  Even desktop-class machines are being combined in ways to create even more powerful multiprocessing clusters (Wikipedia: Xgrid, Beowulf).  EU supporters cannot claim lack of access to reasonable computing power for their own plasma models (if such models actually exist).

There has been significant laboratory and theoretical research on plasmas in nested spherical electrode configurations (similar to some Electric Sun models, such as the one I call the Solar Capacitor Model) in the years since Cobine was published.  This work was usually related to efforts to develop mechanisms for controlled fusion.  Here are just a few of the papers I've found that specifically examine this configuration:
  • C. B. Wheeler. Space charge limited current flow between concentric spheres at potentials up to 15 MV. Journal of Physics A Mathematical General, 10:1645–1649, September 1977. doi: 10.1088/0305-4470/10/9/017.
  •  L. J. Sonmor and J. G. Laframboise. Exact current to a spherical electrode in a collisionless, large-Debye-length magnetoplasma. Physics of Fluids B, 3:2472–2490, September 1991. doi: 10.1063/1.859619.
  • A. Ferreira. Fokker-Planck solution for the spherical symmetry of the electron distribution function of a fully ionized plasma. Physical Review E, 48:3876–3892, November 1993. doi: 10.1103/PhysRevE.48.3876.
  • A. Amin, H.-S. Kim, S. Yi, J. L. Cooney, and K. E. Lonngren. Positive ion current to a spherical electrode in a negative ion plasma. Journal of Applied Physics, 75:4427–4431, May 1994. doi: 10.1063/1.355986.
  • E. S. Cheb-Terrab and A. G. Elfimov. The solution of Vlasov’s equation for complicated plasma geometry. I. Spherical type. Computer Physics Communications, 85:251–266, February 1995. doi: 10.1016/0010-4655(94)00144-Q.
  • V. Y. Bychenkov, J. P. Matte, and T. W. Johnston. Nonlocal electron transport in spherical plasmas. Physics of Plasmas, 3:1280–1283, April 1996. doi: 10.1063/1.871752.
  • O. A. Nerushev, S. A. Novopashin, V. V. Radchenko, and G. I. Sukhinin. Spherical stratification of a glow discharge. Physical Review E, 58:4897–4902, October 1998. doi: 10.1103/PhysRevE.58.4897.
  • F. Cornolti, F. Ceccherini, S. Betti, and F. Pegoraro. Charged state of a spherical plasma in vacuum. Physical Review E, 71(5):056407–+, May 2005. doi: 10.1103/PhysRevE.71.056407.

Going Non-Linear...
One of the popular complaints from ES advocates is that my analyses do not treat the 'non-linear' aspects of the Electric Sun model. If that is their complaint, you'd think ES advocates would be all over this paper:
  • S. Xu and K. N. Ostrikov. Series in vector spherical harmonics: An efficient tool for solution of nonlinear problems in spherical plasmas. Physics of Plasmas, 7:3101–3104, July 2000. doi: 10.1063/1.874166.
EU supporters should be able to use the results of this paper to test an ES model against actual MEASUREMENTS.  Yet in the ten years since its publication, I can find nothing but excuses.  Instead, the EU supporters keep using 'non-linear' the same way creationists use “God did it”: as a magical incantation which frees them from doing any actual WORK that could really be called SCIENCE.

Why don't we see any of the results of the works above (and the many others available) in support of Electric Sun models?  Perhaps it is because
  1. The models did not generate any results that would support ES?
  2. The experiments did not generate any results that would support ES?
  3. The papers didn't have any good 'quote mines' which EU supporters could spin into alleged support for ES?
  4. EU supporters don't know about them because they aren't doing any actual research, or
  5. All of the above?
I vote for 5.

Coming Soon: "Plasma Modeling for Fun AND Profit"!

Saturday, August 14, 2010

On Dark Matter. I: What & Why?

This post is a distillation of some e-mail discussions I have had on this topic.

Some (but not all) young-earth creationists (YECs) deny the existence of Dark Matter because, in galaxies and clusters of galaxies, it is needed to keep these systems gravitationally bound over cosmological times of billions of years.  Since YECs need a young universe, less than 10,000 years old, such long time scales are unnecessary and so, according to them, Dark Matter is not needed.  Their explanation is that the structures we see were created in their present form by a deity and have not had time to undergo any detectable change.

Electric Universe (EU) supporters deny the existence of Dark Matter under the justification that galaxies are powered by giant Birkeland currents and that this mechanism explains the rotation curves of galaxies.  These currents remain undetected in spite of the fact that WMAP had more than enough sensitivity to detect the synchrotron radiation Dr. Peratt claimed they should emit.

Some popular-level treatments of Dark Matter:
365 Days of Astronomy Podcast, Dark Matter: Not Like the Luminiferous Ether, by Rob Knop
Dark Matter: A Primer

What is “Dark Matter”?
“Dark Matter” is a generic term for something whose precise nature we don't yet know.  Once we know what it is and detect it directly, it will certainly be renamed.  Its most general description is matter which can be detected by its gravitational influence but not (as yet) by more direct means such as emitted light.

Over the years, its observational definition has changed as refined instruments made it possible to identify some non-luminous or low-luminosity components of dark matter with known objects and processes.

- MACHOs: non-luminous stellar-scale objects detected as part of the MACHO project

- Ionized hydrogen: free protons (positive hydrogen ions, sometimes called HII by astronomers) have no spectrum.  However, because ionizing hydrogen contributes an equal number of free electrons to the intergalactic medium, it can alter the ionization balance of other elements which have spectra we can detect.  This relationship allows us to infer the amount of ionized hydrogen in the IGM.

- Neutrinos: For a number of years, neutrinos with mass were regarded as the prime candidate for dark matter.  As solar neutrino and ground-based experiments on neutrino oscillations placed smaller and tighter limits on the mass and other characteristics of the neutrino, it was eventually realized that neutrinos could be only part of the non-baryonic Dark Matter problem.

Dark Matter hasn't been demonstrated in the laboratory, so why believe it exists?

Many things were 'known' before they could be clearly demonstrated in the laboratory.  In many cases it was possible to devise indirect tests which were used to narrow in on the details.  That information was then used to refine techniques for future direct-detection experiments.  Not all of these problems involved objects in distant space.
  • From about 1920 to 1932, atomic physicists could not explain why most atoms were about twice as massive as the protons they contained.  They knew there was something that made up for the mass difference and primary speculation was some type of tightly bound proton-electron configuration, but those types of models did not produce good results.  The answer would await the discovery of the neutron in 1932, which did not interact by the electromagnetic force.  I have yet to find any papers predicting the existence of a neutral particle with a mass approximately that of the proton.
  • From 1933 to 1954, nuclear physicists had great success calculating nuclear reaction rates using a hypothetical particle they called the neutrino.  The neutrino salvaged conservation of energy and explained why electrons emitted in beta decay did not have a fixed energy (characteristic of a 2-body decay process) but exhibited a range of energies up to the maximum allowed by energy conservation, characteristic of a many-body decay process.  The neutrino would not be detected directly until 1954.  The neutrino did not interact electromagnetically or via the strong nuclear force.
  • The 1/r^2 force law of Newtonian gravity was not precisely demonstrated at laboratory scales until the 1990s.  The real precision in defining the Newtonian gravitational force was established primarily through observations and precise measurements of planetary motion done years before we could actually travel in space.  If U.S. science had required strict laboratory demonstration of Newtonian gravity before launching our first ballistic missiles or orbiting satellites, the Soviet Union would have kept its lead in spaceflight.
 In addition, astronomy has a rather successful history of detecting things first by their gravitational influence and confirming the objects later as detection technology improved.  Consider these examples from the history of astronomy:
  • The planet Neptune could be considered the first example of 'dark matter', detected gravitationally before being seen optically.  We didn't know that the planet had to exist; we only observed discrepancies in the orbit of Uranus and inferred the existence of a planet based on the understanding of gravity that existed at the time.  Alternatives, such as an extra term in Newton's gravitational force law, were examined as well.
  • Perturbations in the motions of the stars Sirius and Procyon, detected in 1844, were due to white dwarf companion stars too faint to be seen by telescopes of the day.  It took 50 years for telescopes to improve in sensitivity to the level where these small, faint stars could be detected close to a bright primary star.  For 50 years, these stars were 'dark matter'.  We would later determine that these white dwarf stars hinted at another state of matter that existed at densities too high to be produced in current laboratories.
  • Perturbations in the spectral lines of distant stars have been used since 1995 to detect extrasolar planets.  These perturbations are due to the gravitational influence of the orbiting planet on its parent star (a rough estimate of the size of this effect is sketched below).  Only recently have some of these planets been imaged directly.
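To give a sense of scale for those spectral-line shifts, here is a rough sketch using the standard radial-velocity semi-amplitude formula.  The numbers (a Jupiter-mass planet on a circular, edge-on, 11.86-year orbit around a one-solar-mass star) are illustrative choices of mine, not taken from any particular survey.

```python
import math

# Radial-velocity 'wobble' a planet induces on its parent star, using the
# standard Doppler semi-amplitude formula.  Illustrative Jupiter-analog values.

G      = 6.674e-11         # m^3 kg^-1 s^-2
M_star = 1.989e30          # kg, ~1 solar mass
M_p    = 1.898e27          # kg, ~1 Jupiter mass
P      = 11.86 * 3.156e7   # orbital period, s (~11.86 years)
e      = 0.0               # assumed circular orbit
sin_i  = 1.0               # assumed edge-on orbit

# K = (2*pi*G/P)^(1/3) * M_p sin(i) / ( (M_star + M_p)^(2/3) * sqrt(1 - e^2) )
K = (2 * math.pi * G / P) ** (1.0 / 3.0) * M_p * sin_i \
    / ((M_star + M_p) ** (2.0 / 3.0) * math.sqrt(1.0 - e ** 2))

print(f"Radial-velocity semi-amplitude ~ {K:.1f} m/s")   # ~12 m/s
```

A wobble of roughly a dozen meters per second is why such planets stayed hidden until spectrographs reached that level of Doppler precision.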
Just as in these historical examples, we know there are limits in our ability to detect some processes and particles.  If a problem is solved by known processes that operate below our current detection threshold, then these are reasonable lines of research to pursue (dark matter, the proton-proton reaction).  However, if the suggested solution indicates that the process is well within the detection threshold of current technology, that is most likely a dead end for research (see “Testing Science at the Leading Edge”).

To be continued...
Minor typo fixed.  Thanks to the commenter who caught it.

Blogspot problems posting comments

Over the past few weeks, there have been an annoying number of problems posting comments, where the comment system reports an error message suggesting the comment was rejected, but actually accepts the comment.

This issue has been reported by multiple users in the blogspot help forums but does not appear to have been fixed, at least as of yesterday evening.  There have been a number of changes made in the comment moderation software and this may be related.

Hopefully it will be fixed soon. 

In the meantime, if you get an error message starting with "URI too long..." while posting a comment, odds are good that the comment was recorded by the system.

My apologies for the aggravation.  It happens to me too. 

Saturday, August 7, 2010

Electric Universe: Real Plasma Physicists Use Mathematical Models!

One of the problems with Electric Universe (EU) claims is that its supporters seem incapable of producing mathematical models that can be used by other researchers to compare the predictions of their theories to other observations and experiments.  The common EU excuse is that plasma behavior is too complex to be modeled mathematically.  But that excuse reveals an almost schizophrenic mindset in the EU community.

One of the heroes of the EU supporters is Hannes Alfven (Wikipedia).  They rarely mention Alfven without mentioning that he was a winner of the Nobel Prize in Physics in 1970 (Nobel) and that this gives him more credibility than other researchers.  However, Alfven is not the only winner of the Nobel prize.  There are laureates back to 1901 (Nobel Physics Laureate List), including a number of prizes related to astrophysics:
  • 1951: John Cockcroft and Ernest Walton for studies in the transmutation of the atomic nucleus.  Much of this effort was driven by George Gamow's (wikipedia) theoretical work on quantum tunneling for the nuclear reactions needed to power the stars.
  • 1967: Hans Bethe (wikipedia) for solving the problem of stellar nucleosynthesis, building the light elements from hydrogen by a series of fusion reactions.  Bethe did this work in 1939.  A few years later he would be leading the theory group at Los Alamos as part of the effort to build the first atomic bomb.  He would later lead the theory group for the development of the hydrogen bomb.
  • 1983: Subrahmanyan Chandrasekhar and William Alfred Fowler for their work in nuclear astrophysics.
  • 1993: Joseph Taylor and Russell Hulse for demonstrating tests of general relativity in the binary pulsar.
  • 2006: John Mather and George Smoot for the COBE measurements of the Cosmic Microwave Background.
Many of these other prizes were awarded for achievements which EU supporters claim are not valid science.  So what makes Alfven's claims about plasma cosmology more valid, when he was given the award for the development of magnetohydrodynamics (MHD), NOT his work on plasma cosmology?

So how does Alfven's Nobel Prize for MHD give plasma cosmology more credibility than the Nobel Prizes received by others FOR work on the standard cosmology?  Is the prize Nobel or ignoble? 

But what about MHD?  Just what is MHD? MHD is a set of mathematical equations (Wikipedia) which describes the behavior of certain classes of plasmas.  MHD works best for dense plasmas where the mean-free-path of the charged particles (the average distance between particle collisions) is small compared to the gyro-radius (the radius of the orbit of the particle in the magnetic field) of the particles.  This means the plasma behaves much more like a fluid (hence magnetoHYDROdynamics).
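To see what that criterion looks like with actual numbers, here is a rough sketch.  It uses commonly quoted plasma-formulary approximations (densities in cm⁻³, temperatures in eV, magnetic fields in Gauss), an assumed Coulomb logarithm of 10, and illustrative plasma parameters of my own choosing, so treat the output as order-of-magnitude only.

```python
import math

# Compare the electron Coulomb mean free path with the electron gyro-radius
# to see whether a plasma is in the collisional (fluid-like) regime described
# above.  Approximations are the commonly quoted plasma-formulary forms.

def electron_mfp_cm(n_e, T_eV, ln_lambda=10.0):
    """Approximate electron Coulomb mean free path [cm] (n_e in cm^-3, T in eV)."""
    return 1.44e13 * T_eV ** 2 / (n_e * ln_lambda)

def electron_gyroradius_cm(T_eV, B_gauss):
    """Approximate thermal electron gyro-radius [cm] (T in eV, B in Gauss)."""
    return 2.38 * math.sqrt(T_eV) / B_gauss

# Illustrative dense laboratory discharge: n ~ 1e16 cm^-3, T ~ 2 eV, B ~ 1 kG.
n_e, T, B = 1e16, 2.0, 1000.0
mfp = electron_mfp_cm(n_e, T)
r_g = electron_gyroradius_cm(T, B)

print(f"mean free path ~ {mfp:.2e} cm, gyro-radius ~ {r_g:.2e} cm")
print("collisional (fluid-like) regime" if mfp < r_g else "collisionless regime")
```

Swapping in different densities, temperatures, and field strengths makes it easy to see which plasmas are comfortably in the fluid regime and which are not.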

    •    Magnetohydrodynamics at Scholarpedia.
    •    Computational Magnetohydrodynamics at Wikipedia
    •    Plasma Modeling at Wikipedia
    •    Plasma Physics at Wikipedia

Alfven's accomplishments in astronomy did earn him the Gold Medal of the Royal Astronomical Society in 1967, and he won the Nobel Prize for MHD, which is actively used in astronomy today (including cases with less than infinite conductivity).  The chronic EU claim that Alfven was ignored by the astrophysical community doesn't hold up to the facts.  Like all scientists, Alfven had ideas that worked and ideas that didn't.  His ideas that actually worked were clearly adopted and appreciated by the astrophysical community.

Most of the criticism of Alfven seems to focus on his tendency to cling to ideas, such as plasma cosmology, that were clearly failures.  One of the greatest problems I've had with Alfven's papers was his focus on quantities such as the total current in a system.  While this quantity is useful for exploring constraints such as the energy budget (matching energy inflows to outflows), it is otherwise very difficult to tie back to what an observation or instrument might actually measure, such as a flux density.

Many other Electric Universe 'heroes' developed mathematical models of plasmas as well.  Anthony Peratt's galaxy model received some examination because it was presented in a form that facilitated mathematical analysis.  The problem is that all the evidence indicates Nature didn't see fit to actually build galaxies that way (see "Scott Rebuttal. II. The Peratt Galaxy Model vs. the Cosmic Microwave Background", "Electric Universe: More data refuting the EU galaxy model").

Irving Langmuir, who coined the term 'plasma', also pioneered the mathematical analysis of plasmas and electric discharges in gases.  He was the first to explore the effect of 'space charge' (Wikipedia) in a plasma, where the changing velocities of electrons and ions in an electric field can create regions of net charge density which can have significant effects on the plasma flow.

Considering that a number of the EU supporters' 'heroes' were pioneers and strong advocates of mathematical modeling of plasmas, EU's denial of plasma modeling could best be described as hypocritical or schizophrenic.

Sunday, August 1, 2010

Darwin & Hitler: the Intelligent Design-Eugenics connection?

Periodically, the “Hitler supported evolution” claim is raised by supporters of Creationism and Intelligent Design (ID). It was used heavily in the ID-supported 'documentary' “Expelled: No Intelligence Allowed” (see "Expelled Exposed"). Recently, the “Exposing Pseudoastronomy” blog had an interesting take with "If Darwin Is Responsible for the Holocaust, Newton Is Responsible for Bombs", making the point that scientific discovery is morally neutral and that knowledge can be used for good or evil. The same studies of atomic and nuclear physics that made modern computers possible also contributed to the development of the atomic bomb. Bottom Line: Blaming science for human abuses of knowledge is a cop-out.

I had always seen this claim justified, not by what Hitler actually wrote or said, but by someone else's *interpretation* of Hitler's behavior or writing. Considering the level of distortion possible through such third-hand routes, I decided to read “Mein Kampf” for myself.

First, note that I read the Ralph Manheim translation (1998, Houghton Mifflin Co, ISBN: 0-395-92503-7), which is in many bookstores. Hopefully I caught all the typos in my transcription below. I'll give the page numbers so others can confirm my claims (and perhaps check against other translations).

The rest of this post is based largely on a thread I originally posted to the USENET group Talk.Origins, in August 2006.

So what did I discover in this reading? Hint: I didn't find a single mention of Darwin in the nearly 700 pages of Hitler's ramblings.

Hitler believed he was doing God's work:
“Hence today I believe that I am acting in accordance with the will of the Almighty Creator: by defending against the Jew, I am fighting for the work of the Lord.“ [pg 65]
In fact he used many religious comparisons throughout the text:
“Sooner will a camel pass through a needle's eye than a great man be 'discovered' by an election.“ [pg 88]
“Verily a man cannot serve two masters. And I consider the foundation or destruction of a religion far greater than the foundation or destruction of a state, let alone a party.“ [pg 114]
“Certainly we don't have to discuss these matters with the Jews, the most modern inventors of the cultural perfume. Their whole existence is an embodied protest against the aesthetics of the Lord's image.“ [pg 178]
“Anyone who dares lay hands on the highest image of the Lord commits sacrilege against the benevolent creator of this miracle and contributes to the expulsion from paradise.“ [pg 383]
So regardless of any atheistic inclinations he exhibited after obtaining power, during his rise to power, he knew well invoking religion would increase his support among the populace. How many modern politicians exploit that same trick?

He expressed admiration of Christianity for its fanaticism:
“The greatness of Christianity did not lie in attempted negotiations for compromise with any similar philosophical opinions in the ancient world, but in its inexorable fanaticism in preaching and fighting for its own doctrine.“ [pg 351]
and the adherence to dogma over science:
“Here, too, we can learn by the example of the Catholic Church. Though its doctrinal edifice, and in part quite superfluously, comes into collision with exact science and research, it is none the less unwilling to sacrifice so much as one little syllable of its dogmas. It has recognized quite correctly that its power of resistance does not lie in its lesser or greater adaptation to the scientific findings of the moment, which in reality are always fluctuating, but rather in rigidly holding to dogmas once established, for it is only such dogmas which lend to the whole body the character of a faith. And so today it stands more firmly than ever. It can be prophesied that in exactly the same measure in which appearances evade us, it will gain more and more blind support as a static pole amid the flight of appearances.“ [pg 459]
“Faith is harder to shake than knowledge, love succumbs less to change than respect, hate is more enduring than aversion, and the impetus to the mightiest upheavals on this earth has at all times consisted less in a scientific knowledge dominating the masses than in a fanaticism which inspired them and sometimes in a hysteria which drove them forward.“ [pp 337-338]
He didn't like the notion of being compared to apes (common ancestry with apes is a common complaint in creationist literature):
“A folkish state must therefore begin by raising marriage from the level of a continuous defilement of the race, and give it the consecration of an institution which is called upon to produce images of the Lord and not some monstrosities halfway between man and ape.“ [pg 402]
Here it almost looks like he's describing the Theory of Evolution:
“Nature herself in times of great poverty or bad climatic conditions, as well as poor harvest, intervenes to restrict the increase of population of certain countries or races; this, to be sure, by a method as wise as it is ruthless. She diminishes, not the power of procreation as such, but the conservation of the procreated, by exposing them to hard trials and deprivation with the result that all those who are less strong and less healthy are forced back into the womb of the eternal unknown. Those whom she permits to survive the inclemency of existence are a thousandfold tested, hardened, and well adapted to procreate in turn, in order that the process of thoroughgoing selection may begin again from the beginning. By thus brutally proceeding against the individual and immediately calling him back to herself as soon as he shows himself unequal to the storm of life, she keeps the race and species strong, in fact, raises them to the highest accomplishments.“ [pp 131-134]
but then there's this:
“No more than Nature desires the mating of weaker with stronger individuals, even less does she desire the blending of a higher with a lower race, since, if she did, her whole work of higher breeding, over perhaps hundreds of thousands of years, might be ruined with one blow.“ [pg 286]
where he suggests higher breeding is a GOAL of Nature. Isn't that one of the claims of Intelligent Design???

And this is consistent with:
“And in this it must remain aware that we, as guardians of the highest humanity on this earth, are bound by the highest obligation, and the more it strives to bring the German people to racial awareness so that, in addition to breeding dogs, horses, and cats, they will have mercy on their own blood, the more it will be able to meet this obligation.“ [pg 646]
Hitler compares his program of racial purification not to Darwin's natural selection, but to ANIMAL BREEDING or 'controlled selection', a practice which predates Darwin by thousands of years. Such 'controlled selection' was practiced by humans in forms ranging from 'ethnic cleansing' to maintaining 'royal' bloodlines LONG before Darwin. Like other pseudosciences, such racial programs were happy to incorporate modern scientific terminology in an attempt to enhance their credibility (see “Electric Universe: Everything I needed to know about science I learned from watching Star Trek?”). That species can change was known by animal breeders for millennia - Darwin just recognized that the natural environment could also act as a selection mechanism.

One of the key arguments used to support Creationism and Intelligent Design is that Natural Selection 'loses information', or is a 'degenerative' process, a claimed consequence of the Second Law of Thermodynamics. This seems to be the very argument that Hitler uses against 'natural selection': it still allows 'unfit' individuals to breed, so he clearly advocated controlling breeding based on his own criteria of 'fitness'.

The notion of Intelligent Design is that for 'higher' beings to evolve, a 'Designer' must intervene, lest Natural Selection cause the population to 'lose information' and degenerate. How is this different from Hitler's justification of his eugenics (Wikipedia) policies [note the 'defilement' quote from page 402 above]?

From an operational perspective, the only difference between eugenics and Intelligent Design I can see is that eugenics is willing to name the designer (other humans)!  I have been disturbed by the amount of ID rhetoric which seeks to enhance the distinction between (superior) humans and (inferior) non-human species.  How different is this from the rhetoric of racist groups who equate others to non-humans?

Could Intelligent Design be a Trojan Horse for eugenics?
