|Figure: histogram of galaxy counts in thin redshift bins. Click to enlarge|
The purpose of these exercises was to develop an understanding of how this type of distribution can form. We can now examine the behavior of this histogram with new insights.
For z < 0.08, the number of galaxies we see is dominated by the fact that we are counting in a steadily larger sphere. Each point in the histogram is the number of galaxies in a thin spherical shell of thickness delta_z at radius z, and the volume of that shell grows as z^2 as we observe deeper into space. Beyond about z = 0.08, however, we start missing the faintest galaxies because they fall below what our telescopes can detect, and the counts begin to drop relative to the number we would expect, even though each shell still encloses a larger volume of space. In a perfectly uniform (and infinite) universe, an observer at any location would see a similar histogram of galaxy counts in distance space (and in z-space, if the two are correlated).
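To see how a brightness limit carves this shape out of a uniform distribution, here is a minimal toy simulation (my own sketch, not the code behind the figures): galaxies scattered uniformly in volume, each given a random luminosity, with a crude flux cut standing in for the telescope's detection limit. The flux limit, luminosity distribution, and all the numbers are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# Scatter galaxies uniformly in a sphere of radius z_max, using z as a proxy
# for distance as in the post. Drawing r = z_max * u**(1/3) makes the number
# density uniform in volume.
n_gal = 500_000
z_max = 0.25
z = z_max * rng.random(n_gal) ** (1.0 / 3.0)

# Give each galaxy a random luminosity and apply a crude flux limit:
# apparent brightness falls as 1/z**2, so faint galaxies drop out at large z.
luminosity = rng.lognormal(mean=0.0, sigma=1.0, size=n_gal)
flux = luminosity / z**2
flux_limit = 200.0          # arbitrary detection threshold for this toy model
detected = flux > flux_limit

# Histogram the detected galaxies in thin shells of width delta_z.
delta_z = 0.001
bins = np.arange(0.0, z_max, delta_z)
counts, _ = np.histogram(z[detected], bins=bins)

# At small z nearly everything is detected and counts grow roughly as z**2;
# farther out the flux limit removes the faint end and the counts turn over.
peak_z = bins[np.argmax(counts)]
print(f"counts peak near z = {peak_z:.3f}")
```

The same qualitative shape appears for any falling luminosity function and flux cut; only the location of the turnover moves.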
If the graphic looks a little familiar, it might be because it is very similar to the histograms of galaxy distributions that come from deep-sky galaxy surveys and appear in a number of papers.
For comparison, we can plot our model together with a couple of the better-known deep survey datasets. Because my model universe was assembled using very rough numbers, my histogram curve is a little off, but I can adjust the parameters to generate a better match (or 'fit') to the data. We plot this type of curve along with the profiles of the 2dFGRS (blue) and SDSS DR8 (red) surveys. I've normalized the galaxy counts in all the samples to make a more reliable comparison. While not an exact match, the general profile illustrates some of the mechanisms that drive the large-scale shape.
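The normalization step is simple but worth spelling out: dividing each histogram by its total count puts samples of very different sizes on the same scale, so only the shapes are compared. A minimal sketch, with made-up redshift samples standing in for the model and a survey (the real catalogs would be loaded from their public data files):

```python
import numpy as np

# Hypothetical samples: a power(3) draw has pdf 3*z**2 on [0, 1], i.e. the
# uniform-in-volume n(z) ~ z**2 behavior, rescaled to z_max = 0.25.
model_z = np.random.default_rng(0).power(3, 100_000) * 0.25
survey_z = np.random.default_rng(1).power(3, 40_000) * 0.25

bins = np.arange(0.0, 0.25, 0.005)
model_counts, _ = np.histogram(model_z, bins=bins)
survey_counts, _ = np.histogram(survey_z, bins=bins)

# Normalize each histogram so its bins sum to 1; the two curves can then be
# overplotted and compared by shape even though the sample sizes differ.
model_frac = model_counts / model_counts.sum()
survey_frac = survey_counts / survey_counts.sum()
```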
|Figure: model galaxy-count curve plotted with the 2dFGRS (blue) and SDSS DR8 (red) survey profiles. Click to enlarge|
We can smooth the fluctuations by choosing larger bins for the data; for example, with bins about 20x larger (delta_z = 0.02), we get
|Figure: the same comparison rebinned with delta_z = 0.02. Click to enlarge|
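The rebinning itself is a one-liner in NumPy: reshape the fine-binned counts into groups of 20 and sum each group. A minimal sketch, using made-up Poisson counts rather than the actual histogram:

```python
import numpy as np

# Toy fine-binned histogram: counts in delta_z = 0.001 bins out to z = 0.25.
rng = np.random.default_rng(3)
fine_counts = rng.poisson(lam=100, size=250)

# Rebin by a factor of 20 (delta_z = 0.02) by summing groups of 20 bins.
factor = 20
n = (len(fine_counts) // factor) * factor      # drop any leftover bins
coarse_counts = fine_counts[:n].reshape(-1, factor).sum(axis=1)

print(len(coarse_counts))   # 12 coarse bins from the first 240 fine bins
```

Summing (rather than averaging) preserves total counts; each coarse bin's relative Poisson noise shrinks by roughly sqrt(20).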
An alternative model is that we are seeing the edge of the distribution: the number of galaxies is actually dropping to zero because we are looking back to a time when galaxies were just being formed and the density of galaxies was lower.
Note that this model has spherical (isotropic) coverage from the observer's position. Most real surveys cover only a fraction of the sky at best, and deviations from this mean curve could be due to large structures that dominate one direction and are not spherically distributed around the observer.
There are a number of ideas I'm exploring for additional posts in this series, but I want to take some time to check them carefully.
- Power Spectra - 1-D vs 3-D data.
Currently, my computations of the 3-D power spectra of my mock catalogs are painfully slow. Catalogs as small as 50,000 galaxies can take several days to process into a power spectrum at reasonable resolution, which has made code validation difficult. I'm looking at multiprocessing and other techniques to improve this. The big surveys use various sophisticated mathematical techniques to make the computation more manageable, but I will try to avoid these, both for clarity and because they would require substantial coding, debugging, and explanation time. (The FFT, for example, is faster than a direct Fourier transform because of a mathematical trick.) In the meantime, there are numerous concepts that can be addressed using the mathematical theorems of the Fourier transform and power spectra.
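To illustrate why the direct approach scales badly, here is a sketch of a brute-force power-spectrum estimate (a toy stand-in of my own, not the post's actual code): sum exp(-i k.x) over every galaxy for every k-vector on a grid, then average the squared amplitudes in shells of |k|. The cost is O(N_galaxies x N_modes), which is what blows up for large catalogs.

```python
import numpy as np

rng = np.random.default_rng(7)

# A small mock catalog: N random points in a periodic box of side L.
# (At ~50,000 galaxies and fine k-resolution this direct sum is what gets slow.)
n_gal, box = 2000, 100.0
pos = rng.random((n_gal, 3)) * box

# Grid of k-vectors: integer multiples of the fundamental mode 2*pi/L.
n_modes = 6
k_fund = 2 * np.pi / box
ints = np.arange(-n_modes, n_modes + 1)
kx, ky, kz = np.meshgrid(ints, ints, ints, indexing="ij")
kvec = k_fund * np.stack([kx.ravel(), ky.ravel(), kz.ravel()], axis=1)

# Direct sum for the density modes delta_k = sum_j exp(-i k.x_j).
phase = kvec @ pos.T                      # shape (Nk, N): k.x for every pair
delta_k = np.exp(-1j * phase).sum(axis=1)

power = (np.abs(delta_k) ** 2) / n_gal    # shot-noise normalization
kmag = np.linalg.norm(kvec, axis=1)

# Shell-average into a 1-D P(k), skipping the k = 0 (mean density) mode.
bins = k_fund * np.arange(0.5, n_modes + 1)
which = np.digitize(kmag, bins)
pk = np.array([power[which == i].mean() for i in range(1, len(bins))])
print(pk)   # roughly 1 in every bin: pure shot noise for an unclustered catalog
```

For a clustered catalog the shell averages would rise above the shot-noise floor at the scales of the clustering; the FFT-on-a-mesh shortcut used by the big surveys computes essentially the same quantity much faster.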
- Slicing the survey data.
Because all these datasets are publicly available, I have samples of the nearly quarter-million galaxies in 2dFGRS and over 800,000 galaxies from SDSS DR8. These two surveys provide some really nice data for visualization. For example, if galaxies were really concentrated in concentric spheres around the Milky Way Galaxy, this should be easy to illustrate by selecting subsets of the surveys in any given direction (choosing sections of the sky of equal angular area).
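Selecting equal-angular-area subsets is straightforward once the catalog is in RA/Dec form: cones of a fixed angular radius all subtend the same solid angle, so their counts are directly comparable. A sketch with a hypothetical random catalog (a real run would read the public 2dFGRS or SDSS DR8 catalog files instead):

```python
import numpy as np

# Hypothetical catalog columns: RA, Dec in degrees and redshift z.
rng = np.random.default_rng(11)
n = 100_000
ra = rng.uniform(0.0, 360.0, n)
dec = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, n)))  # uniform on the sphere
z = 0.25 * rng.random(n) ** (1.0 / 3.0)                 # uniform in volume

def cone_mask(ra, dec, ra0, dec0, radius_deg):
    """Select galaxies within an angular radius of a direction (ra0, dec0)."""
    ra_r, dec_r = np.radians(ra), np.radians(dec)
    ra0_r, dec0_r = np.radians(ra0), np.radians(dec0)
    # Angular separation via the spherical law of cosines.
    cos_sep = (np.sin(dec_r) * np.sin(dec0_r)
               + np.cos(dec_r) * np.cos(dec0_r) * np.cos(ra_r - ra0_r))
    return cos_sep > np.cos(np.radians(radius_deg))

# Redshift histograms of two equal-area cones in different directions;
# concentric shells around us would show matching peaks in every direction.
m1 = cone_mask(ra, dec, ra0=30.0, dec0=0.0, radius_deg=10.0)
m2 = cone_mask(ra, dec, ra0=210.0, dec0=45.0, radius_deg=10.0)
h1, edges = np.histogram(z[m1], bins=np.arange(0.0, 0.25, 0.01))
h2, _ = np.histogram(z[m2], bins=edges)
```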
- Improving the mock catalog.
There are many effects of real physics that are not included in my simulation, which are integrated into professional mock catalogs (see Creating synthetic universes in a computer by Carlton Baugh). Some of the effects I have not included are:
- The redshift, z, is not a linear function of distance, r, over the full range
- Galaxies interact under gravity, altering their motion with respect to the Hubble flow; this is sometimes called 'clumping'. These gravitationally induced motions mean that galaxies at a given distance, r, will have a redshift, z, different from the one indicated by the Hubble relationship. This is also responsible for the "Finger of God" artifact (wikipedia). (See also Redshift-space Distortions, Redshift-distortions.)
- Galaxy evolution. As the stars in a galaxy evolve, the luminosity and spectra of the galaxy will change.
- There is a luminosity correction for the galaxy due to how the redshift alters the amount of light in the visible part of the spectrum: photons from the red end of the spectrum get shifted into the infrared, while photons from the ultraviolet end are shifted into the violet. This is also called the k-correction (wikipedia).
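The first effect in the list, the nonlinearity of the redshift-distance relation, can be made concrete with a short numerical integration. This sketch assumes standard flat Lambda-CDM parameters (Omega_m = 0.3, Omega_Lambda = 0.7, H0 = 70 km/s/Mpc), which are typical textbook values, not numbers taken from this series:

```python
import numpy as np

C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # Hubble constant, km/s/Mpc
OMEGA_M, OMEGA_L = 0.3, 0.7

def comoving_distance(z, n_steps=10_000):
    """Integrate c dz'/H(z') from 0 to z (trapezoidal rule, result in Mpc)."""
    zp = np.linspace(0.0, z, n_steps)
    integrand = C_KM_S / (H0 * np.sqrt(OMEGA_M * (1.0 + zp) ** 3 + OMEGA_L))
    return np.sum((integrand[:-1] + integrand[1:]) / 2.0) * (zp[1] - zp[0])

for z in (0.05, 0.5, 2.0):
    linear = C_KM_S * z / H0          # naive linear Hubble-law distance
    exact = comoving_distance(z)
    print(f"z={z}: linear {linear:7.1f} Mpc, integrated {exact:7.1f} Mpc")
```

At z = 0.05 the two agree to about a percent, which is why the linear approximation works for the low-redshift histograms above; by z = 2 the linear extrapolation overshoots badly.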