We’re at the stage of exploring the Evryscope dataset for science. The system hardware is working perfectly (and has been for over a year), and has weathered snowstorms, a direct lightning strike and a magnitude-8.5 earthquake. On this page we’ve detailed the status of the next steps, including the system image quality, the pipeline, generated light curves, and the ability to go to deeper observations with the system (last updated Feb. 2017).
Evryscope images: We’ve been working hard to optimize our image quality over the last year, including building some of the first fully-robotic alignment systems for dozens of camera lenses. We routinely align our lenses with few-micron repeatability across the entire field (see Ratzloff et al. 2015), and have developed a new tip/tilt/focus alignment method that rapidly measures and optimizes the image quality over our entire focal plane (we will publish that algorithm shortly). We now routinely exceed our original image-quality goals in almost every camera, pushing our limiting magnitude a little fainter than expected (a few straggler lenses only just meet the original image-quality requirements; they will be replaced soon). Here’s an example (about 1% of an individual exposure):
Our saturation limit is around 9th magnitude (although our anti-blooming chips mean brighter stars remain usable for photometry, at reduced precision). Our limiting magnitude is g = 15.5–16.5 depending on moon conditions, although we automatically build several-hour co-added exposures each night that go much deeper (see below).
Pipeline: The Evryscope pipeline processes 360 GB of imaging data each night, detecting, measuring and inserting approximately 600 million object detections. The resulting tens-of-terabytes database can be queried rapidly enough to extract light curves for tens of thousands of targets. The data volume rules out shipping the raw data to UNC for processing, so everything runs on a high-performance server in the telescope dome. After a year of work, we’re happy to say the pipeline is working well and we’re moving to science operations.
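To get a feel for what those numbers imply, here is a back-of-envelope check of the sustained write rate the database must absorb. The number of usable dark hours per night is our assumption for illustration; only the 600-million-detections figure comes from the text above.

```python
# Back-of-envelope: sustained detection-insertion rate for the pipeline.
detections_per_night = 600e6
hours_dark = 9            # assumed usable observing hours per night (illustrative)

rate = detections_per_night / (hours_dark * 3600)
# Of order 20,000 detections per second, sustained all night --
# which is why the processing happens on a server at the telescope.
```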
Light curves: We’re currently generating long-term light curves from the system. This is complex because of the sheer data volume involved: a typical database query for a target-star light curve plus its nearby companion stars can return several gigabytes.
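A light-curve extraction of this kind can be sketched as a box search around the target position. The schema, table name, and column names below are illustrative assumptions for the sketch, not the actual Evryscope database layout:

```python
import sqlite3

# Minimal illustrative schema -- not the real Evryscope database layout.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE detections
              (star_id INTEGER, ra REAL, dec REAL, mjd REAL, mag REAL)""")
db.execute("CREATE INDEX idx_pos ON detections (ra, dec)")

# Toy rows: a target at (150.0, -30.0), one nearby companion, one distant star.
db.executemany("INSERT INTO detections VALUES (?, ?, ?, ?, ?)", [
    (1, 150.000, -30.000, 57000.0, 12.30),
    (1, 150.000, -30.000, 57000.1, 12.32),
    (2, 150.002, -29.999, 57000.0, 13.10),
    (3, 170.000, -10.000, 57000.0, 11.00),  # far away; excluded by the search
])

def lightcurves_near(ra, dec, radius_deg):
    """Return all epochs for stars inside a small box around (ra, dec)."""
    q = """SELECT star_id, mjd, mag FROM detections
           WHERE ra BETWEEN ? AND ? AND dec BETWEEN ? AND ?
           ORDER BY star_id, mjd"""
    return db.execute(q, (ra - radius_deg, ra + radius_deg,
                          dec - radius_deg, dec + radius_deg)).fetchall()

rows = lightcurves_near(150.0, -30.0, 0.01)  # target plus companion epochs
```

With hundreds of epochs per night per star, a query like this over a crowded field quickly adds up to the multi-gigabyte result sets described above.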
The photometric performance is looking very good: the detrending algorithms (based on both the Trend Filtering Algorithm and SysRem) are performing well, and we routinely reach scintillation-limited precision. Here are a couple of variable stars showing the current performance for typical “middling-bright” targets.
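To illustrate the detrending step, here is a minimal single-component SysRem iteration (after Tamuz, Mazeh & Zucker 2005), written in plain Python with unit uncertainties for brevity. This is a sketch of the published algorithm, not the pipeline’s actual implementation, which is not described here:

```python
def sysrem(resid, n_iter=10):
    """Remove one SysRem systematic from resid[star][epoch].

    Alternately fits per-star coefficients c[i] and per-epoch terms a[j]
    so that c[i]*a[j] best matches the residual matrix, then subtracts
    the fitted systematic (unit uncertainties assumed for simplicity).
    """
    n_star, n_epoch = len(resid), len(resid[0])
    a = [1.0] * n_epoch
    c = [0.0] * n_star
    for _ in range(n_iter):
        # Best-fit star coefficients for the current epoch terms.
        denom_a = sum(x * x for x in a)
        c = [sum(resid[i][j] * a[j] for j in range(n_epoch)) / denom_a
             for i in range(n_star)]
        # Best-fit epoch terms for the current star coefficients.
        denom_c = sum(x * x for x in c)
        a = [sum(resid[i][j] * c[i] for i in range(n_star)) / denom_c
             for j in range(n_epoch)]
    # Subtract the fitted rank-1 systematic from every light curve.
    return [[resid[i][j] - c[i] * a[j] for j in range(n_epoch)]
            for i in range(n_star)]

# Toy example: a pure rank-1 systematic is removed almost exactly.
u, v = [1.0, 2.0, 3.0], [0.5, -0.5, 1.0, 0.2]
data = [[ui * vj for vj in v] for ui in u]
cleaned = sysrem(data)
```

In practice several such components are removed in sequence, each one capturing a shared trend (airmass, clouds, focus drift) across many stars at once.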
Here is the photometric performance over the whole sky on several-month timescales, along with the requirements for some of our science programs. This covers all cloud and moon conditions encountered during the observations. Restricting to the darkest, best-quality nights and uncrowded fields, we typically attain 6-7 mmag precision at the bright end.
Below are results from a query of our light-curve database for a random set of 400 eclipsing binaries from VSX (a truly random query, with no selection for clean-looking light curves, low systematic noise, etc.). We’re routinely getting percent-precision light curves for almost every bright eclipsing binary in the Southern sky.
Coadding: Because we’re observing the entire sky all the time, we automatically build long exposures of the entire sky each night. Our images are completely sky-background-limited, so we can co-add to gain depth without any penalty from read noise. So far we’ve demonstrated that co-adding an hour of our gigapixel images gives the expected depth improvement; here’s an example from one night. We’ve scaled the images to clearly show the full PSFs, background noise profiles, bad pixels, etc.
We expect that co-adding will allow us to routinely cover the entire sky to g=17-18 magnitude at least every few hours.
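The arithmetic behind that expectation: for sky-background-limited images, co-adding N equal frames improves the point-source signal-to-noise by √N, a depth gain of 2.5·log10(√N) = 1.25·log10(N) magnitudes. The 2-minute exposure time assumed below is the Evryscope’s standard cadence; treating all frames as equal quality is a simplification.

```python
import math

def coadd_depth_gain(n_frames):
    """Magnitude depth gain from co-adding n equal, sky-limited frames:
    SNR improves by sqrt(n), so the limit deepens by 2.5*log10(sqrt(n))."""
    return 1.25 * math.log10(n_frames)

# One hour of 2-minute exposures = 30 frames:
gain = coadd_depth_gain(30)  # about 1.85 magnitudes deeper
# Starting from g ~ 15.5-16.5 per frame, an hour-long co-add reaches
# roughly g ~ 17.3-18.3, consistent with the g = 17-18 goal.
```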