Thursday, April 12, 2007

Bandwidth Smearing

Once your radio data are calibrated you can SPLIT off the source and apply the calibration. During SPLIT you'll want to average together as many channels as possible. The number of channels you can average is limited by bandwidth smearing, which smears sources out in the radial direction. Bandwidth smearing becomes significant when the product (\frac{\delta \nu}{\nu}) \times (number of synthesized beams from the image center) reaches unity, where \delta \nu is your channel width and \nu is your observing frequency (not the total bandwidth). So if you have a really small beam, you won't be able to average very many channels during SPLIT if you want to avoid smearing sources that are far from the center of your field.

For VLA data the correlator doesn't give you very many channels, so this matters less than with GMRT data, which always gives 128 channels. I find that with 610 MHz GMRT data I can only average every 2 channels when I SPLIT off the source if I want to avoid radial smearing out to the edge of my primary beam (HPBW of 0.7 degrees).

Your source dataset will most likely have more than one channel when you're ready to run IMAGR, but you only want one image in the end, so you'll need to set NCHAV and CHINC to the total number of channels in your dataset. IMAGR is smart and knows how to combine these channels during the imaging process so that bandwidth smearing isn't an issue. (So it's actually not imperative that you average any channels together when you SPLIT off your source data, but doing so does help IMAGR run faster.)
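If you want to work out the numbers for your own data, here's a minimal sketch of the smearing criterion. The 6-arcsecond synthesized beam and the half-power offset used in the example are illustrative assumptions, not values from any particular dataset:

```python
# Bandwidth-smearing criterion: smearing becomes significant when
# (delta_nu / nu) * (offset from image center, in synthesized beams) ~ 1.

def max_channel_width(nu_obs_hz, offset_deg, beam_arcsec):
    """Largest channel width (Hz) before radial smearing becomes
    significant at a given offset from the image center.

    nu_obs_hz   : observing frequency in Hz
    offset_deg  : distance of the source from the image center, degrees
    beam_arcsec : synthesized beam size, arcseconds
    """
    offset_beams = offset_deg * 3600.0 / beam_arcsec
    return nu_obs_hz / offset_beams

# Example (assumed numbers): 610 MHz observation, source at the
# primary-beam half-power point (0.35 deg out for a 0.7 deg HPBW),
# with an assumed 6" synthesized beam.
dnu_max = max_channel_width(610e6, 0.35, 6.0)
print(f"max channel width before smearing: {dnu_max / 1e6:.2f} MHz")
```

Divide that maximum channel width by your actual channel width to get the largest number of channels you can safely average in SPLIT. In practice you'd want the product comfortably below unity, so averaging somewhat less than this limit is safer.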


amanda said...

Sort of off topic, but I'm not entirely sure that AIPS combines the different IFs optimally for my data. I seem to get much better results in Miriad.

emily said...

oh right, and what does miriad call it? multi-frequency imaging? we should track down why AIPS doesn't do it as well.

Anonymous said...

Hi there,

Off topic, & quite possibly a stupid question from an AIPS newbie, but I'd really appreciate any help. I'm currently processing some GMRT data (30mins, 610MHz, 38 facets, each imsize 512, cellsize 1.5), & IMAGR seems to take a very long time to run, ~2hrs for 1000 iterations. I'm wondering if this is normal, or if something has gone wrong during the SPLAT process, where I split off the source from the multi source dataset, and averaged every 7 channels between channels 1 & 105.

When running with NCHAV=15, IMAGR appears to process channels 1-15, then 2-16, 3-17, etc. Is this normal for data that has been averaged as above?