A High-Throughput Feature-Based Sorting System for Artificial Selection in Drosophila

As part of a project funded by the Swiss National Science Foundation in collaboration with the Institute of Molecular Systems Biology at ETH Zurich, Scitracks developed the FlyCatwalk, a fully automated, high-throughput system for sorting live fruit flies (Drosophila melanogaster) based on morphometric traits.

The FlyCatwalk framework and workflow. (A) Setup: 1, tunnel entrance; 2, singling-out gate; 3, camera; 4, measurement channel; 5, sorting gate; 6, pneumatic valves; 7, storage device; and 8, XZ robot. (B) Flies in the entrance chamber are periodically activated by air pulses to encourage movement toward the measurement chamber. Once a fly enters the measurement chamber, a gate closes to prevent further flies from entering. While the fly walks along the tunnel of the chamber, it is imaged with a high-resolution camera and valid images are retained. On reaching the end of the tunnel, the fly is either blown back to the entrance chamber, if too few images are judged valid for analysis, or transferred into a well of the storage device, which then advances one slot. There is also a time limit for crossing the channel; if it is exceeded before sufficient valid images have been acquired, the fly is blown back to the entrance chamber directly. All images corresponding to stored flies are saved to disk for in-depth morphometric analysis. The whole process repeats until all wells of the storage device are filled with flies.
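The per-fly decision logic in (B) can be sketched as a small control loop. The sketch below is illustrative only: the handler names (`acquire_image`, `is_valid`, `blow_back`, `store`, `save_images`) and the two thresholds are hypothetical placeholders, since the actual camera and pneumatics interfaces are not described here.

```python
# Sketch of the FlyCatwalk per-fly sorting loop. All handler names and
# thresholds are assumptions, not the system's actual API.
import time

MIN_VALID_IMAGES = 5     # assumed minimum for a fly to be stored
CROSSING_TIMEOUT_S = 30  # assumed time limit for crossing the channel

def process_fly(acquire_image, is_valid, blow_back, store, save_images):
    """Run one fly through the measurement channel and decide its fate.

    Returns True if the fly was stored, False if it was blown back.
    """
    valid_images = []
    start = time.monotonic()
    while True:
        # Time limit: flies that linger too long are returned unmeasured.
        if time.monotonic() - start > CROSSING_TIMEOUT_S:
            blow_back()
            return False
        frame, at_end = acquire_image()
        if is_valid(frame):
            valid_images.append(frame)
        if at_end:
            break
    if len(valid_images) < MIN_VALID_IMAGES:
        blow_back()          # too few usable images: try again later
        return False
    store()                  # transfer fly into the next storage well
    save_images(valid_images)
    return True
```

With stub handlers, a fly yielding six valid frames is stored, while one yielding only three is blown back, mirroring the two outcomes described in the caption.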

The FlyCatwalk can detect sex and quantify body and wing morphology parameters at a fourfold higher throughput than manual processing. The phenotyping results acquired with the FlyCatwalk correlate well with those obtained using the standard manual procedure. We demonstrate that an automated, high-throughput, feature-based sorting system avoids previous limitations on population size and replicate number. Our approach can likewise be applied to a variety of traits and experimental settings that require high-throughput phenotyping.


Morphometric analysis. (A–H) Sex detection. Sex is determined using two methods. First, the luminance (A, E) is evaluated along the abdomen's longitudinal axis (B, F) and compared with templates of male and female abdominal luminance to determine which of the two templates gives the better correlation. Second, from the entire video sequence, the most anterior leg pair (C, G) is identified and scanned for sex combs (D, H), which are present only in males. (I–L) Detection and quantification of interocular distance. The position of the head is determined from the body segmentation (Figure S1). The position of the ocelli is extracted using a template-matching algorithm (J, template; K, result of the template matching). The luminance (black line in L) is evaluated along the line intersecting the two posterior ocelli (white line in I), and its derivative (red line in L) is computed to identify regions of high contrast (dashed lines in L) corresponding to the borders of the compound eyes. (M, N) Wing fitting and quantification. The raw image is filtered and binarized, and a skeleton-extraction algorithm is then applied to detect vein structures (M). A B-spline is fitted to the wing to determine the outline and veins L2-L5 (N). Yellow solid lines: B-spline fit of the wing outline and veins L2-L5. Red dashed line: segment used to define wing length. Green dashed line: segment used to define wing width.
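The eye-border step in (I–L) amounts to a 1-D edge detection on the luminance profile. A minimal NumPy sketch, assuming the profile has already been sampled along the line through the posterior ocelli; the threshold fraction and the pixel-to-millimetre scale are made-up values, not the authors' calibration.

```python
import numpy as np

def interocular_distance(luminance, threshold=0.5, px_per_mm=100.0):
    """Estimate interocular distance from a 1-D luminance profile.

    The derivative of the profile peaks where contrast is high, i.e. at
    the borders of the dark compound eyes. `threshold` (a fraction of
    the maximum absolute derivative) and `px_per_mm` are assumed values.
    """
    d = np.gradient(luminance.astype(float))
    # Indices where the contrast exceeds the threshold (eye borders).
    strong = np.flatnonzero(np.abs(d) >= threshold * np.abs(d).max())
    centre = len(luminance) // 2
    # Innermost high-contrast point on each side of the profile centre.
    left = strong[strong < centre].max()    # inner border of left eye
    right = strong[strong >= centre].min()  # inner border of right eye
    return (right - left) / px_per_mm
```

On a synthetic profile with a bright head region flanked by dark eyes (zeros at both ends, a bright plateau in the middle), the two derivative peaks bracket the plateau and the returned distance is simply the plateau width divided by the scale factor.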