To make numerical predictions, modern scientists usually rely on computers to do the repeated mathematical calculations (sometimes called 'number crunching') that these types of predictions require. Once the programs are written, the computer can run them tirelessly.
One tool in many researchers' toolboxes is some type of particle simulation, used to study the interactions and motions of many particles. Codes written for an arbitrary number of particles, N, are usually called N-body codes.
I first wrote my own N-body code in 1979 in AppleSoft BASIC on an Apple II computer (wikipedia), in my early college years (the FIRST time I tried to complete my degree). Back then, computational libraries for solving differential equations existed only for large scientific systems. On smaller computers, you had to write your own, but plenty of documentation of the techniques was available. One of the most popular techniques for solving these types of differential equations is the family of Runge-Kutta integrators (wikipedia).
Today, computer hardware is readily available, as are numerical processing libraries. Numerical N-body solvers are simple enough that a competent programmer in high school could write their own.
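To make that concrete, here is a minimal sketch of a classical fourth-order Runge-Kutta (RK4) step for the gravitational N-body problem. This is a generic illustration, not the code from my actual implementation; the unit system (G = 1) and function names are choices made for this sketch.

```python
import numpy as np

G = 1.0  # gravitational constant in simulation units (an assumption of this sketch)

def accelerations(pos, masses):
    """Pairwise gravitational accelerations.  pos: (N, 3) positions,
    masses: (N,) masses.  Returns an (N, 3) array of accelerations."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]  # vector from body i to body j
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

def rk4_step(pos, vel, masses, dt):
    """Advance all bodies by one time step dt using classical RK4."""
    k1v = accelerations(pos, masses)
    k1x = vel
    k2v = accelerations(pos + 0.5 * dt * k1x, masses)
    k2x = vel + 0.5 * dt * k1v
    k3v = accelerations(pos + 0.5 * dt * k2x, masses)
    k3x = vel + 0.5 * dt * k2v
    k4v = accelerations(pos + dt * k3x, masses)
    k4x = vel + dt * k3v
    pos_new = pos + (dt / 6.0) * (k1x + 2 * k2x + 2 * k3x + k4x)
    vel_new = vel + (dt / 6.0) * (k1v + 2 * k2v + 2 * k3v + k4v)
    return pos_new, vel_new
```

The double loop over pairs is the simplest (O(N²)) force evaluation; a production code would vectorize it, but the structure of the integrator is the same.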
So for this project, I've updated my N-body code. I've written the new version in Python (v2.7) using the numerical libraries numpy, scipy, and matplotlib, among others. I've also written an interface to generate rendered output using the POVray rendering package.
N-body Test Runs
In testing these types of codes, we usually perform runs on simple configurations where the solution is available in a known form. Here are the results from a 2-body gravitational run, the Kepler Problem (wikipedia). The program allows me to generate tables of particle positions once, after which I can read these tables and construct different types of plots. Here I plot the results of the run in the reference frame in which the velocities and positions are originally defined.
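That generate-once, plot-many workflow can be sketched as follows. The file name, array shapes, and data here are hypothetical, not the actual format my code uses; the point is simply that numpy makes round-tripping a position table to a text file trivial.

```python
import numpy as np

# Hypothetical trajectory: 3 time steps, 2 bodies, 3 coordinates each.
trajectory = np.arange(18, dtype=float).reshape(3, 2, 3)

# Flatten to one row per time step so the table is plain, human-readable text.
np.savetxt("positions.txt", trajectory.reshape(3, -1))

# Later (possibly in a separate plotting script), read the table back
# and restore the (steps, bodies, 3) shape before making plots.
table = np.loadtxt("positions.txt").reshape(3, 2, 3)
```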
Note that there is a point on the line between the two particles that represents the center-of-mass (wikipedia), or barycenter, of the pair. In this configuration, the center-of-mass moves in a straight line and at a uniform speed. This is a property of the center-of-mass that can be mathematically proven for a system of particles interacting by a central force (where the force acts along the line between the particles).
Now let's define some of our mathematical terms:
x(A),y(A), z(A) = x, y, z coordinates of object A in the original coordinate system
x(CM), y(CM), z(CM) = x, y, z coordinates of center-of-mass in original coordinate system
Next we can plot the exact same data points, but this time we re-compute their positions in a reference frame where the center-of-mass is not moving. We do this by computing the center-of-mass of the two particles at each time step (x(CM), y(CM), z(CM)) of the simulation, and then compute the position of the particles relative to that center-of-mass:
x(A,CM), y(A,CM), z(A,CM) = coordinates of object A in the CM coordinate system, where the center-of-mass is translated to the origin, (0,0,0)
x(A,CM) = x(A) - x(CM)
y(A,CM) = y(A) - y(CM)
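With numpy, this transformation is a couple of broadcast operations. The masses and positions below are illustrative stand-ins, not data from the actual run:

```python
import numpy as np

masses = np.array([10.0, 1.0])  # illustrative masses; units are arbitrary here

# pos has shape (steps, bodies, 3): positions at each time step (hypothetical data)
pos = np.array([[[0.0, 0.0, 0.0], [11.0, 0.0, 0.0]],
                [[0.1, 0.0, 0.0], [10.0, 1.0, 0.0]]])

# Center-of-mass at each time step: mass-weighted average over the bodies axis.
com = (masses[None, :, None] * pos).sum(axis=1) / masses.sum()

# Subtract the center-of-mass from every body at every step.
pos_cm = pos - com[:, None, :]
```

By construction, the mass-weighted sum of the `pos_cm` coordinates is zero at every time step, which makes a handy sanity check on the transform.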
For simplicity in plotting, I've restricted the simulation run to motion in 2-dimensions, x and y, with z=0. For this type of problem, it can also be mathematically proven that if the particles start with all positions and velocities in the z-direction equal to zero, they will remain zero at future times.
Now we can plot the particle trajectories in the CM reference frame, which generates the following plot.
Now the two particles revolve around their mutual center-of-mass (position 0,0 in the plot), always keeping the center-of-mass on the line between them. It can similarly be mathematically proven that in a system of particles interacting by central forces, the motion can always be separated into the motion of the particles around the center-of-mass, and the motion of the center-of-mass of the entire system.
Finally, we complete the Kepler 2-body problem by plotting the two particles in a coordinate system where the larger body represents the origin of the coordinate system. This is the coordinate system of the original Kepler problem, and the coordinate system where Kepler's Laws (wikipedia) are defined. To do this, we again use the exact same set of points generated in the simulation, and subtract the position of the larger body at each time step. So
x(A,ref), y(A,ref), z(A,ref) = position of object A, in the coordinate system where the origin is object ref.
x(A,ref) = x(A) - x(A) = 0.0
y(A,ref) = y(A) - y(A) = 0.0
Object A resides at the origin in this coordinate system.
x(B,ref) = x(B) - x(A)
y(B,ref) = y(B) - y(A)
Plotting these results for each point in the simulation generates the movie.
We also plot the position of the center-of-mass in this new coordinate system, which appears to orbit the main body.
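This frame is the same subtraction idea as before, now with the larger body as the origin; the transformed center-of-mass then traces out the small orbit seen in the plot. Again, the data below are hypothetical placeholders:

```python
import numpy as np

masses = np.array([10.0, 1.0])  # body 0 is the larger ("ref") body

# pos: (steps, bodies, 3) positions at each time step (hypothetical data)
pos = np.array([[[1.0, 0.0, 0.0], [3.0, 2.0, 0.0]],
                [[1.5, 0.5, 0.0], [2.0, 4.0, 0.0]]])

# Subtract body 0's position at each step: body 0 lands at the origin.
pos_ref = pos - pos[:, 0:1, :]

# The center-of-mass, transformed the same way, orbits the main body.
com_ref = (masses[None, :, None] * pos_ref).sum(axis=1) / masses.sum()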
For this simulation, I've chosen 1 AU as the unit of distance, and 1 year as the unit of time. The large body is 10 solar masses and the smaller is 1 solar mass. I've chosen the starting positions and velocities so that the orbits are sufficiently elliptical that you can see motions relative to the center-of-mass. Competent scientists could use the information in these plots to estimate such additional parameters as speeds at apoapsis and periapsis. Do EU 'theorists' know how to determine this information from the simulation?
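One way to pull those numbers out of a saved position table: find the time steps where the separation is smallest and largest, and estimate the relative speed there by finite-differencing the positions. This is a sketch of the idea, not the analysis code used for the plots; it assumes the table is a (steps, 2, 3) array sampled at a fixed time step.

```python
import numpy as np

def extreme_speeds(pos, dt):
    """Estimate the relative speeds at periapsis and apoapsis from a table
    of sampled positions.  pos: (steps, 2, 3) array; dt: sampling interval."""
    rel = pos[:, 1] - pos[:, 0]                      # separation vector
    r = np.linalg.norm(rel, axis=1)                  # separation distance
    speed = np.linalg.norm(np.gradient(rel, dt, axis=0), axis=1)
    return speed[np.argmin(r)], speed[np.argmax(r)]  # (periapsis, apoapsis)
```

The estimate is only as good as the time resolution near the extremes, so a real analysis would sample densely around periapsis, where the motion is fastest.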
To illustrate the flexibility of these types of programs, I can also generate simulations of electromagnetic systems. Here I have a plot of some electrons and nuclei (hydrogen, deuterium, tritium, helium-3, helium-4) in a configuration where an external electric (yellow arrow) and magnetic field (blue arrow) are at right angles to each other.
This visualization was generated using the POVray renderer. Currently, electromagnetic interactions between particles are not included in the simulation run, but will be added in the future.
Note that I have placed Creative Commons notices (BY-NC) on these movies for a reason (see Setterfield Again...). Any website which hosts these movies must include a proper credit to me and a link back to this page.
Coming on Deck...
These opening samples are just the tip of the iceberg of what can be done with this tool.
Future topics for this thread:
- An introduction to N-Body simulations
- The physics and mathematics of N-Body simulations
- Some features of my particular implementation
- How do we test/validate the simulations?
- Lagrange Points
- Lagrange Points with an extra planet
- Three-body systems such as Sun-Earth-Moon
- Three-body systems such as Sun-Earth-Jupiter
- Solar system model (Sun + eight planets)
- Various particle and electromagnetic field configurations
- The “Tatooine system”, Kepler 16AB (NASA, Exoplanet Catalog)
- Lorentz Slingshot (see 365 days of astronomy podcast)
- Velikovsky's Venus (what would happen to a planet launched out of Jupiter?)
- The Newton-Coulomb problem, or what if the Sun and planets had significant electric charge?
- Particle-in-Cell (PIC) support to accommodate more realistic plasma models (wikipedia). This moves the code in a direction suitable to run a version of Peratt's galaxy simulations. Will I be generating testable models while Siggy_G is still generating excuses?
And Still the Pseudo-Scientists Can't Meet the Standards
One of the popular whines from pseudo-scientists is that they could produce worthy results if they had the same funding as the mainstream scientists.
Gee, I'm sure a lot of practicing scientists would like to have that kind of funding as well, in the rare cases where it exists.
Most scientists are not paid to do research full time. I certainly am not. Often research is a part-time project, done between teaching classes or doing support work (writing analysis software, archiving datasets, visualization production) for other science projects.
Note that I did not have any grant to work on this project. I didn't have to hire an army, or even a single programmer beyond myself. Beyond the computer and operating system, I used open-source software available to anyone. The work was done on ordinary commercially available computer systems (a laptop and a desktop), with the laptop used primarily for code development and testing and the desktop used for the heavier processing of full simulation runs.
The code was developed in my spare time (though some components of the development have already found application in my day job).