The design of the Robo-AO software architecture and automation was led by Reed Riddle (Riddle et al., 2012). The software currently consists of more than 120,000 lines of documented code. It is written entirely in C++ on a (non-real-time) Fedora 13 Linux operating system for easy portability. Some of the housekeeping tasks (compressing and archiving data, telemetry) are performed with custom bash scripts.
The overarching goal behind the Robo-AO software was to ensure easy portability to different operating setups or hardware (if another group decides to, say, use a different deformable mirror or a different camera) and easy reconfiguration for retuning and for observing in different modes. To achieve this goal, the Robo-AO software consists of many different daemons (modules), each of which performs an independent task. The daemons interact through TCP/IP sockets, which allows them to be run on different machines, although the entire computational load of Robo-AO is easily borne by a single quad-CPU machine. Each daemon uses a bash-style configuration file that allows the user to change almost every possible variable for that daemon.
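As a rough illustration of such a bash-style configuration file, a daemon could read key=value pairs at startup along the lines sketched below. The file name and keywords (e.g. WFS_FRAME_RATE, TCP_PORT) are invented for this example and are not the actual Robo-AO parameters.

```cpp
// Minimal sketch of reading a bash-style "KEY=VALUE" configuration file.
// File name and keywords are hypothetical examples, not Robo-AO's own.
#include <fstream>
#include <iostream>
#include <map>
#include <string>

std::map<std::string, std::string> read_config(const std::string& path) {
    std::map<std::string, std::string> cfg;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        // Skip blank lines and bash-style comments.
        if (line.empty() || line[0] == '#') continue;
        std::size_t eq = line.find('=');
        if (eq == std::string::npos) continue;
        cfg[line.substr(0, eq)] = line.substr(eq + 1);
    }
    return cfg;
}

int main() {
    // Example file contents might look like:
    //   # AO system daemon configuration
    //   WFS_FRAME_RATE=1200
    //   TCP_PORT=5001
    auto cfg = read_config("aosysd.cfg");
    for (const auto& kv : cfg)
        std::cout << kv.first << " -> " << kv.second << "\n";
    return 0;
}
```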
The overall architecture of the system is shown in Figure 2.25. The robod daemon is the main supervisor module that orchestrates the observing program throughout the night.
It commands the subsystem daemons (tcsd, telescope control; aosysd, AO system; lgsd, laser guide star; etc.) to obtain targets, point the telescope, set up cameras, lock the AO control loops, and gather science data. The subsystem daemons are responsible for the minutiae of the operation and only require high-level commands (e.g. START AO LOOP) to perform all the steps needed to fulfill the command. Each subsystem daemon then returns a status or an error message to inform the supervisor daemon of the result, as sketched below.
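The command/status exchange could be sketched as follows. The command strings, handler functions, and reply format here are illustrative assumptions rather than the actual Robo-AO protocol, and the TCP/IP transport is omitted for brevity.

```cpp
// Schematic sketch of a subsystem daemon's command dispatch: a high-level
// command string arrives (in Robo-AO, over a TCP/IP socket), the daemon
// carries out the detailed steps, and a status or error string is returned.
// Command names and handlers are hypothetical.
#include <functional>
#include <iostream>
#include <map>
#include <string>

std::string start_ao_loop() {
    // The real daemon would configure the WFS camera, close the loop on the
    // deformable mirror, etc.; here we only report success.
    return "OK: AO loop running";
}

std::string stop_ao_loop() {
    return "OK: AO loop stopped";
}

std::string handle_command(const std::string& cmd) {
    static const std::map<std::string, std::function<std::string()>> handlers = {
        {"START AO LOOP", start_ao_loop},
        {"STOP AO LOOP",  stop_ao_loop},
    };
    auto it = handlers.find(cmd);
    if (it == handlers.end())
        return "ERROR: unknown command '" + cmd + "'";
    return it->second();  // run the detailed steps, return the status
}

int main() {
    std::cout << handle_command("START AO LOOP") << "\n";
    std::cout << handle_command("PARK TELESCOPE") << "\n";  // -> ERROR reply
    return 0;
}
```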
Hardware interaction is achieved through a wrapper layer between the device drivers and the daemons. This allows the daemons to maintain a standard interface while the wrapper isolates the hardware-specific commands. Thus, in order to use a different camera or actuator unit, the only section of the code that needs to be replaced is the wrapper layer.
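A minimal sketch of this wrapper idea, assuming a hypothetical camera interface (the class and method names are not those of the Robo-AO code), might look like:

```cpp
// Sketch of the hardware wrapper layer: daemons program against an abstract
// interface, and only the vendor-specific wrapper changes when the hardware
// is swapped. Class and method names are hypothetical.
#include <iostream>
#include <memory>
#include <vector>

class Camera {                        // standard interface seen by the daemons
public:
    virtual ~Camera() = default;
    virtual bool initialize() = 0;
    virtual std::vector<unsigned short> grab_frame() = 0;
};

class VendorACamera : public Camera { // hardware-specific wrapper
public:
    bool initialize() override {
        // Vendor-specific driver calls would go here.
        return true;
    }
    std::vector<unsigned short> grab_frame() override {
        // Placeholder: return an empty frame instead of real driver output.
        return std::vector<unsigned short>(80 * 80, 0);
    }
};

int main() {
    // Swapping cameras means constructing a different wrapper here;
    // the daemon code that uses the Camera interface is unchanged.
    std::unique_ptr<Camera> cam(new VendorACamera);
    if (cam->initialize())
        std::cout << "Frame pixels: " << cam->grab_frame().size() << "\n";
    return 0;
}
```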
Each daemon has multiple threads running in parallel, including error, status, and logging threads. The error thread is responsible for detecting and attempting to correct error conditions. Setting up the error conditions and the error-handling functions is the biggest task in automating the system.
The status thread monitors the state of the daemon (initialized, running, stopped, etc.) and reports to the supervising daemons whenever the status is requested. The logging thread records the command, warning, and error messages sent to or from the daemon in a log file.
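A skeletal sketch of this threading structure is given below; the loop bodies, state variables, and messages are placeholders rather than the actual Robo-AO status, logging, or error-handling logic.

```cpp
// Sketch of a daemon running a status thread alongside its main work, with a
// shared logging function; the content of each thread is a placeholder.
#include <atomic>
#include <chrono>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>

std::atomic<bool> running{true};
std::mutex log_mutex;

void log_message(const std::string& msg) {
    std::lock_guard<std::mutex> lock(log_mutex);  // serialize log output
    std::cout << "[log] " << msg << "\n";         // real system writes a log file
}

void status_thread() {
    while (running) {                             // report the daemon state
        log_message("status: running");
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }
}

int main() {
    std::thread status(status_thread);
    log_message("daemon initialized");
    std::this_thread::sleep_for(std::chrono::milliseconds(500));  // main work
    running = false;
    status.join();
    return 0;
}
```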
Each of the subsystem daemons is composed of many separate functions that initialize the hardware, monitor its function, and manage its operation to achieve successful scientific output. As an example, consider the structure of the AO subsystem daemon (aosysd). Two threads independently control hardware operations and measurements based on the output of its two CCD cameras. A status thread monitors the variables and the hardware output to ensure that the system is functioning correctly. A command thread accepts commands over the TCP/IP interface, executes them, and returns their output to the calling subsystem. The error thread captures and corrects error states, up to and including restarting the entire subsystem hardware if a serious enough error is detected.
In essence, each of the subsystem daemons is an individual robotic program that manages its hardware and operates according to external commands.
The subsystem daemons communicate their state through the TCP/IP protocol to a system monitoring service, which the robotic system uses to control the subsystems and to correct errors. The robotic system schedules observations and operates the instrumentation to gather the data, while a watchdog process monitors the system status and the robotic system itself, catching errors that the robotic system misses or cannot handle.
Figure 2.25. The software architecture of Robo-AO. Each hardware device is controlled by a subsystem daemon. Blue boxes are the hardware control subsystem daemons, gray boxes are control or oversight daemons, and red boxes are data file storage; red lines with arrows show the paths for telemetry through the operating system, black lines the command paths, and blue lines the data paths. Figure 2 from Riddle et al. (2012).
2.3.1 AO Control System
My first assignment on joining the Robo-AO project was to complete a literature survey of AO control systems and to create the AO correction algorithm to be implemented in Robo-AO. Appendix C (adapted from an internal report written in April 2010) includes all the details of the implementation of the AO control system.
Figure 2.26. The algorithm used for processing input from the wavefront sensor (WFS) in LGS-mode operation.
Here, we shall briefly discuss the AO control loop, shown in Figure 2.26. The loop begins with a single wavefront sensor image. For each of the 97 subapertures in the image, the total intensity of the laser spot is calculated. If the intensity is above a certain threshold, the centroid of the Shack-Hartmann spot within the subaperture (i.e., the x and y slopes of the local wavefront) is calculated; otherwise the subaperture is flagged as having low light and the centroid is not calculated. The measured slopes are linearized through a look-up table that accounts for the response of the quad-cell. The fiducial centroid positions (or slope offsets) are subtracted from the measured values, and the final values are arranged into a slope vector with 97×2 = 194 elements. The slope vector is multiplied by a reconstructor matrix, which produces a least-squares estimate of the wavefront shape projected onto the 120 deformable mirror actuators, together with an estimate of the laser uplink tip-tilt. This estimate of the wavefront error is applied to the deformable mirror through an integral control law with a small leak term, which allows the deformable mirror to tend gracefully toward a flat position if no slopes can be measured (in low-light conditions).
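In symbols, and with notation chosen here purely for illustration (these variable names are not necessarily those used in the Robo-AO code), let $\mathbf{s}_n$ be the 194-element slope vector measured at frame $n$, $\mathbf{R}$ the reconstructor matrix, $g$ the loop gain, and $\ell \ll 1$ the leak term. A leaky integral control law of the kind described above then updates the actuator command vector $\mathbf{c}_n$ as

$$ \mathbf{c}_{n+1} \;=\; (1-\ell)\,\mathbf{c}_n \;+\; g\,\mathbf{R}\,\mathbf{s}_n , $$

so that when no valid slopes are available ($\mathbf{s}_n = 0$) the commands decay geometrically and the mirror relaxes toward its flat position. A compact sketch of one iteration of such a loop is given below; the array sizes follow the numbers quoted in the text (97 subapertures, 194 slopes, 120 actuators), while the flux threshold, look-up table, and data layout are placeholders rather than the actual Robo-AO implementation.

```cpp
// Schematic sketch of one AO loop iteration as described in the text:
// threshold each subaperture, linearize the centroids, subtract the slope
// offsets, reconstruct the wavefront error, and apply a leaky integral
// control law. Everything beyond the array sizes is a placeholder.
#include <array>
#include <vector>

constexpr int kSubaps = 97;
constexpr int kSlopes = 2 * kSubaps;    // x and y slope per subaperture
constexpr int kActuators = 120;

struct Subap { double flux, cx, cy; };  // total intensity and raw centroid

double linearize(double c) { return c; }  // placeholder for the quad-cell LUT

void ao_step(const std::array<Subap, kSubaps>& subaps,
             const std::array<double, kSlopes>& offsets,
             const std::vector<std::vector<double>>& R,  // kActuators x kSlopes
             std::vector<double>& dm,                    // actuator commands
             double gain, double leak, double flux_min) {
    std::array<double, kSlopes> s{};                 // slopes; 0 if low light
    for (int i = 0; i < kSubaps; ++i) {
        if (subaps[i].flux < flux_min) continue;     // flag low-light subaperture
        s[2 * i]     = linearize(subaps[i].cx) - offsets[2 * i];
        s[2 * i + 1] = linearize(subaps[i].cy) - offsets[2 * i + 1];
    }
    for (int a = 0; a < kActuators; ++a) {           // leaky integrator per actuator
        double err = 0.0;                            // reconstructed error term
        for (int j = 0; j < kSlopes; ++j) err += R[a][j] * s[j];
        dm[a] = (1.0 - leak) * dm[a] + gain * err;
    }
}

int main() {
    std::array<Subap, kSubaps> subaps{};   // dark frame: all slopes stay zero
    std::array<double, kSlopes> offsets{};
    std::vector<std::vector<double>> R(kActuators, std::vector<double>(kSlopes, 0.0));
    std::vector<double> dm(kActuators, 0.0);
    ao_step(subaps, offsets, R, dm, 0.3, 0.01, 100.0);  // one loop iteration
    return 0;
}
```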
The AO control loop has been demonstrated to run as fast as 1.5 kHz. However, the best operating rate to balance our error sources is 1.2 kHz, which is now the standard rate.
At 1.2 kHz, the wavefront sensor integrates 8–9 pulses from the laser beacon (operating at 10 kHz; 10 kHz / 1.2 kHz ≈ 8.3 pulses per frame) before the frame is read out and processed. As we will discuss in Chapter 3, this bandwidth is sufficient to account for most of the higher-order wavefront errors.