Tuesday, July 22, 2008
How does CALIB make use of clean components?
A question from Sam Tun...
I've been working on some self-cal programs for the Owens Valley Solar Array, and I am having similar problems to these. I would like a self-caled map that gives me a good approximation to my CLEAN map fluxes, but no go so far. So, my question is, does anyone know what is done to the clean components to get them to return appropriate visibilities to feed into self-cal? I believe that in AIPS you feed the CLEAN map as a model into CALIB, but does anyone know the details of what goes on in there (or where to find them)? Thanks.
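For reference, the usual recipe is to treat each clean component as a point source and take a direct Fourier transform of the component list to get model visibilities, which the gain solver then compares against the observed data; my understanding is that CALIB's CMETHOD='DFT' option does essentially this (the gridded-FFT alternative is faster but less exact). A minimal numpy sketch of the idea -- the function name and the sign convention are my own assumptions:

```python
import numpy as np

def cc_model_vis(u, v, flux, l, m):
    """Model visibilities from a list of delta-function clean components.

    u, v : baseline coordinates in wavelengths (one entry per visibility)
    flux : clean-component flux densities in Jy
    l, m : component offsets from the phase center in radians
    """
    # Each point source of flux F at (l, m) contributes
    # F * exp(-2*pi*i*(u*l + v*m)) to every visibility.
    phase = np.outer(u, l) + np.outer(v, m)   # shape (Nvis, Ncomp)
    return np.exp(-2j * np.pi * phase) @ flux
```

Self-cal then solves for the antenna gains that best reconcile the observed visibilities with these model visibilities.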
Posted by Laura at 10:22 AM 1 comments
Labels: self calibration
Total Flux in an Image Region?
A question from an anonymous reader...
But I have this issue: how do I find the total flux in a given region? TVSTAT can make regions, but it shows only mean and rms. Is there any task that will show the total flux under a selection?
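For what it's worth, the arithmetic itself is simple: the image pixels are in Jy/beam, so the total flux in Jy is the sum of the pixel values over the region divided by the beam area in pixels. A numpy sketch, with invented argument names:

```python
import numpy as np

def region_flux(pixels, bmaj, bmin, cellsize):
    """Total flux (Jy) in a region of an image whose pixels are Jy/beam.

    pixels : 2-D array of the region's pixel values (Jy/beam)
    bmaj, bmin : restoring beam FWHM major/minor axes (arcsec)
    cellsize : pixel size (arcsec)
    """
    # Area of a Gaussian beam in pixels: pi * bmaj * bmin / (4 ln 2 * cell^2)
    beam_pix = np.pi * bmaj * bmin / (4.0 * np.log(2.0) * cellsize**2)
    return pixels.sum() / beam_pix
```

(Within AIPS itself, I believe IMEAN over a BLC/TRC box reports an integrated flux, and BLSUM handles irregular regions -- worth checking the help files.)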
Posted by Laura at 10:14 AM 2 comments
Labels: image analysis
Wednesday, July 16, 2008
How does DBCON deal with flagging?
Does anybody out there understand how DBCON flags your data? The help file is being less than transparent. DBCON seems to copy over all the FG tables from the first image you give it to the DBCON'd file, but does not appear to copy FG tables from the second image.
There is this note in the help file:
"Also, any CL, FG, TY, WX, IM, MC, PC, AT, CT, OB, or GC tables with version=1 will have their source numbers translated and appended to the end of the corresponding table (if any) from the first file."
So, let's say I have 13 flag tables for Image1 and 6 flag tables for Image2 (which happens a lot with the new crazy FG table creation scheme). Does this mean FG #1 from Image2 gets appended to the end of FG #1 from Image1, while FG #2-6 from Image2 get dropped? And then in the future, I'd have to apply FG #13 to get Image1's flags but FG #1 to get Image2's flags? That's kinda dumb...
Posted by Laura at 8:22 PM 4 comments
Labels: concatenating uv data, flagging
Wednesday, July 9, 2008
How to Know When You Can Self-Cal and What Solint To Use
Have you ever done a self-calibration run only to find that the self-cal is actually making your images worse, not better? Have you ever guessed at what SOLINT to use while self-calibrating? I know I have!
People always say you should evaluate the signal-to-noise of your data before self-calibrating, but I never understood what this meant until today! There is a simple equation to find out if you can self-cal and if the SOLINT you are considering might be too short....
First, image your data and clean it fairly deeply. Afterwards you can look in the image header and note the total cleaned flux. This is your 'Signal'.
Second, you want to calculate the noise in your data, per baseline per SOLINT. First, measure the rms noise in your image, in Jy/beam (sig_image). Next, calculate the number of baselines in your data (N_base = N_ant * (N_ant - 1) / 2, where N_ant is the number of operational antennas). Finally, figure out how much time-on-source went into making your image, in minutes (TOS). The noise in your data per baseline for a given SOLINT (in minutes) is then:
Noise = sig_image * sqrt(N_base) * sqrt(TOS/SOLINT)
Now, compare the 'Signal' with the 'Noise'. For 'P' self cal, you want the Signal to be at least 5 times greater than the Noise. If it's not, then increase your SOLINT. For A&P self cal, you probably want a signal-to-noise of 10-20, at least.
Note: If you are doing multi-facet imaging (at lower frequencies), you want to use the total flux in your data -- that is, the sum over all facets. The image headers report this as 'CCTOTAL'.
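To put the whole recipe in one place, here is the arithmetic as a small Python function (the example numbers at the bottom are invented):

```python
import numpy as np

def selfcal_noise(sig_image, n_ant, tos, solint):
    """Per-baseline noise over one solution interval.

    sig_image : image rms in Jy/beam
    n_ant     : number of operational antennas
    tos       : time on source, in minutes
    solint    : solution interval, in minutes
    """
    n_base = n_ant * (n_ant - 1) / 2.0
    return sig_image * np.sqrt(n_base) * np.sqrt(tos / solint)

# Invented example: 0.5 mJy/beam image rms, 25 antennas, 300 min on source.
signal = 0.8                                  # total cleaned flux (Jy) from header
noise = selfcal_noise(5e-4, 25, tos=300.0, solint=1.0)
print(signal / noise)                         # ~5.3: OK for 'P', too low for A&P
```

If the ratio comes out too low, lengthen SOLINT -- the per-interval noise falls as sqrt(SOLINT).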
Posted by Laura at 4:51 PM 0 comments
Labels: self calibration
Self Calibration and DBCON
Let's say you have two data sets that you want to DBCON together. Should you self-calibrate them individually, or after DBCONing?
The tip I got is you should self-cal each data set individually as best you can, then DBCON. However, after DBCONing, there might be some small gain offsets between the two data sets. Do a final A&P self cal on the DBCONed data set with a really long SOLINT, so that you basically have one SN solution per data set. This will ensure the original data sets are as consistent with one another as possible.
Posted by Laura at 4:41 PM 0 comments
Labels: concatenating uv data, self calibration
Two More Ways to Identify Bad Data
Crystal also said a good way to identify RFI is to look at Stokes V, either in TVFLG or UVPLT or whatever. RFI is usually polarized and will pop out as anomalously high amplitudes on particular baselines, channels, or times.
Stokes V will usually mimic your amplitude structure (as a function of baseline length). So, for example, if your source has high amplitudes at short baselines, Stokes V should also be a bit higher on these short baselines (so don't get confused and flag them).
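If you'd rather hunt for those spikes in a script than on the TV, here is a sketch of the same idea for circularly polarized feeds, where Stokes V = (RR - LL) / 2. The MAD-based threshold is my own choice, not anything AIPS does:

```python
import numpy as np

def stokes_v_flags(rr, ll, nsig=6.0):
    """Flag samples with suspiciously strong circular polarization.

    rr, ll : complex RR and LL correlations (same shape)
    Returns a boolean mask that is True where |V| is an outlier;
    RFI is usually strongly polarized, so it spikes in |V|.
    """
    v = np.abs(rr - ll) / 2.0                 # |Stokes V|
    med = np.median(v)
    mad = np.median(np.abs(v - med))          # robust scatter estimate
    return v > med + nsig * 1.4826 * mad
```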
And another way to find that bad data -- data weights. Plot your data in UVPLT with BPARM = 0 13. The weights for all the data should cluster; if there are any points that are anomalously low or high, flag them! Crystal says she likes to use WIPER for this. Obviously, be more concerned about data with really high weights than really low weights -- high-weighted data affect your overall data set more, since they are up-weighted!
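The weight check can be scripted the same way; a toy sketch that flags weights far from the cluster's median (the factor of 5 is a guess -- tune it to your data):

```python
import numpy as np

def odd_weights(wt, fac=5.0):
    """Indices of visibilities whose weights sit outside the main cluster.

    wt  : array of data weights (already-flagged data usually carry wt <= 0)
    fac : how many times above/below the median counts as anomalous
    """
    med = np.median(wt[wt > 0])
    high = wt > fac * med
    low = (wt > 0) & (wt < med / fac)
    return np.where(high | low)[0]
```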
Posted by Laura at 4:30 PM 1 comments
Don't Clip Your Data!
I'm at the GBT, and just had a nice long chat with Crystal Brogan (who was very helpful when she really didn't need to be). I'm gonna list a few tips she suggests.
Here's one: DON'T CLIP!! She said that if you clip your data based on amplitude (or phase), what you're really doing is masking the lower-level bad data. A baseline is probably all bad if it has quite a few high points, and you don't want to just flag the really high stuff, you want to flag it all. If you clip the high stuff, it will be very hard to ever identify that whole baseline as bad. You'll essentially be 'losing' bad data. I know CLIPing is sometimes tempting, especially at the GMRT, but I can see that it's an especially bad idea in this case.
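Here is a purely synthetic numpy illustration of the effect: one baseline carries low-level corruption plus obvious RFI spikes. Clipping removes the spikes, and afterwards the baseline looks nearly as healthy as the others, even though all of its data are still suspect:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
good = rng.normal(1.0, 0.1, (9, n))            # nine healthy baselines
bad = rng.normal(1.0, 0.12, n)                 # one subtly corrupted baseline...
spikes = rng.random(n) < 0.1                   # ...with obvious RFI 10% of the time
bad[spikes] += rng.uniform(1.0, 5.0, spikes.sum())
amps = np.vstack([good, bad])

print(amps.max(axis=1))                        # the last baseline screams at you
clipped = np.where(amps > 1.5, np.nan, amps)   # 'CLIP' everything above 1.5
print(np.nanmax(clipped, axis=1))              # now it looks like all the others
print(np.nanstd(clipped, axis=1))              # the low-level badness barely shows
```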
Posted by Laura at 4:22 PM 1 comments
Labels: flagging