How to compute constrained realizations

To set up a Commander run, use a Python script such as this one: https://gist.github.com/4065322

Some explanation follows. First, the usual imports:

from __future__ import division
import numpy as np
import os
import commander as cm

Then set some Python variables. These are simply reused below and have no meaning by themselves; we'll explain each of them where it is used:

output_filename = 'newmonodip-em7-%d.h5' % os.getpid()
lmax = 500
lprecond = 50
eps = 1e-7
cache_path = 'cache'
seed = None # integer, or None for getting random seed from OS

First we must define our source of data by constructing cm.SkyObservation objects. Constructing these objects does nothing except keep track of a set of filenames; in particular, no data is loaded at this stage. To keep things brief we use a Python loop, but nothing stops you from unrolling the loop yourself to get a Commander 1-style configuration (see the unrolled example after the loop below). Again, simply constructing these objects has no effect by itself; it is how they are used later that matters.

Note that columns in FITS tables are referenced by a tuple (fits_filename, extension_no, column), where column can be either an integer or a name. Also note that while in this case we opt to have Commander do the necessary data processing to get an RMS map, you can of course provide an RMS map yourself by passing TT_rms_map.
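If you do choose to pass TT_rms_map yourself, you can derive it from the N_OBS column and sigma0. A minimal sketch using healpy, assuming the standard WMAP relation sigma(p) = sigma0 / sqrt(N_obs(p)) (filenames illustrative):

import numpy as np
import healpy as hp

# Per-pixel noise RMS from the hit-count map, assuming
# sigma(p) = sigma0 / sqrt(N_obs(p)).
sigma0 = 1.437  # mK, the K1 value from the table below
nobs = hp.read_map('wmap_da_imap_r9_7yr_K1_v4.fits', field=1)  # N_OBS column
rms = sigma0 / np.sqrt(nobs)
hp.write_map('wmap_K1_rms.fits', rms)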

Environment variables (“$DATA”) and home directories (“~dagss”) are always expanded in any filenames.
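The expansion is presumably equivalent to the standard library's path helpers; an illustrative sketch (not Commander's actual code):

import os.path

# With e.g. DATA=/scratch/data this yields '/scratch/data/wmap/n512';
# similarly '~dagss/masks' expands to a path under dagss' home directory.
path = os.path.expanduser(os.path.expandvars('$DATA/wmap/n512'))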

observations = []
for name, da_list in [('K', [('K1', '1.437 mK')]),
                      ('Ka', [('Ka1', '1.470 mK')]),
                      ('Q', [('Q1', '2.254 mK'), ('Q2', '2.140 mK')]),
                      ('V', [('V1', '3.319 mK'), ('V2', '2.955 mK')]),
                      ('W', [('W1', '5.906 mK'), ('W2', '6.572 mK'),
                             ('W3', '6.941 mK'), ('W4', '6.778 mK')])]:
    obs = cm.AverageSkyObservations(name, [
        cm.SkyObservation(
            name=da_name,
            description='Raw WMAP r9 data, from http://lambda.gsfc.nasa.gov',
            # Maps and beams are per differencing assembly, so the filenames
            # are keyed on da_name rather than the band name.
            T_map=('$DATA/wmap/n512/wmap_da_imap_r9_7yr_%s_v4.fits' % da_name, 1, 'TEMPERATURE'),
            TT_nobs_map=('$DATA/wmap/n512/wmap_da_imap_r9_7yr_%s_v4.fits' % da_name, 1, 'N_OBS'),
            TT_sigma0=da_sigma0,
            beam_transfer='$DATA/wmap/n512/wmap_%s_ampl_bl_7yr_v4.txt' % da_name,
            mask_map=('$DATA/wmap/n512/wmap_processing_mask_r9_7yr_v4.fits', 1, 0),
            lmax_beam=lmax)
        for da_name, da_sigma0 in da_list])
    observations.append(obs)
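For reference, here is the K band with the loop unrolled, Commander 1-style; this sketch is equivalent to a single iteration of the loop above:

k_obs = cm.AverageSkyObservations('K', [
    cm.SkyObservation(
        name='K1',
        description='Raw WMAP r9 data, from http://lambda.gsfc.nasa.gov',
        T_map=('$DATA/wmap/n512/wmap_da_imap_r9_7yr_K1_v4.fits', 1, 'TEMPERATURE'),
        TT_nobs_map=('$DATA/wmap/n512/wmap_da_imap_r9_7yr_K1_v4.fits', 1, 'N_OBS'),
        TT_sigma0='1.437 mK',
        beam_transfer='$DATA/wmap/n512/wmap_K1_ampl_bl_7yr_v4.txt',
        mask_map=('$DATA/wmap/n512/wmap_processing_mask_r9_7yr_v4.fits', 1, 0),
        lmax_beam=lmax)])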

Next, model the signal components and foregrounds. Currently the way to get mixing maps is to load output from Commander 1; we must then create a dict that maps observations to mixing-map descriptors (that is, FITS tuples as described above). We create a function that does this, mapping each observation to the channel number that was used in the Commander 1 run:

def get_mixing_maps(comp_num):
    # Create a mapping from observation to Commander 1 output file
    d = {}
    for idx, obs in enumerate(observations):
        fitsfile = '$DATA/wmap/mixing/mixmat_comp%02d_band%02d_k03999.fits' % (comp_num, idx + 1)
        d[obs] = (fitsfile, 1, 0)
    return d
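For instance, with the observations defined above, the descriptor for the first band of component 1 is the tuple:

>>> get_mixing_maps(1)[observations[0]]
('$DATA/wmap/mixing/mixmat_comp01_band01_k03999.fits', 1, 0)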

Set up prior power spectra for synchrotron and dust, using NumPy arrays. We forbid them from picking up a monopole and dipole, to avoid degeneracies in the system:

ls = np.arange(lmax + 1)
ls[0] = ls[1]  # avoid division by zero at l = 0
prior_synch = 3e5 * ls**-2.1
prior_dust = 1e3 * ls**-1.8
# Zero the l = 0, 1 prior power so these components cannot absorb a
# monopole or dipole (those belong to the monodipole component below).
prior_synch[:2] = 0
prior_dust[:2] = 0

Then set up our model; note that we can pass either a FITS file or a NumPy array as power_spectrum:

cmb = cm.IsotropicGaussianCmbSignal(
    name='cmb',
    power_spectrum='$DATA/wmap/wmap_lcdm_sz_lens_wmap5_cl_v3.fits',
    lmax=lmax
    )

dust = cm.MixingMatrixSignal(
    name='dust',
    lmax=lmax,
    power_spectrum=prior_dust,
    mixing_maps=get_mixing_maps(1)
    )

synchrotron = cm.MixingMatrixSignal(
    name='synch',
    lmax=lmax,
    power_spectrum=prior_synch,
    mixing_maps=get_mixing_maps(2)
    )

monodipole = cm.MonoAndDipoleSignal('monodipole', fix_at=[])

signal_components = [cmb, dust, synchrotron, monodipole]

Again, constructing the various ...Signal objects has no effect by itself; they must be used. The signal_components list is what really matters here.

Next, construct a CommanderContext, which is responsible for figuring out the parallelization scheme to use, among other things:

ctx = cm.CommanderContext(cache_path=cache_path, seed=seed)

The cache_path should be a directory where results that are likely to be reused in other runs will be cached. The seed seeds the random number generator; leave it as None to get a fresh seed from the OS on each run.
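To make a run repeatable, pass a fixed integer instead:

ctx = cm.CommanderContext(cache_path=cache_path, seed=42)  # any fixed integer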

Finally:

realizations = cm.app.constrained_realization(
    ctx,
    output_filename,
    observations,
    signal_components,
    wiener_only=False, # want samples
    lprecond=lprecond,
    eps=eps,
    filemode='w')

This does the actual work. Note that we pass in the lists of observations and signal_components; if we didn't, none of the above would do anything!

The resulting HDF5 file contains some information about what went into the run, plus the results:

  • /power/<comp_name>: The power spectrum of the component
  • /samples/<comp_name>: The coefficients, in m-major ordering

The latter can be processed with the routines in commander.sphere.mmajor, or with the command-line tool, which does an alm2map and dumps the result to a FITS file:

$ cmdr dumpmap --nside=512 myresults.h5 /samples/cmb 0 cmb.fits
$ cmdr dumpmap --nside=512 myresults.h5 /samples/synch 0 synch.fits
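If you prefer to stay in Python, the output file can also be inspected directly with h5py; a minimal sketch (dataset shapes and any additional groups are not documented here):

import h5py

with h5py.File('myresults.h5', 'r') as f:
    f.visit(print)                # list every group/dataset in the file
    cl = f['/power/cmb'][...]     # power spectrum of the 'cmb' component
    alm = f['/samples/cmb'][...]  # sample coefficients, m-major ordering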
