2017 Competition Results
The results of the 2017 competition have now been published as a paper collection. The overall findings are described in the central paper, and the methodological details are available in the participants' individual papers in the collection. The main score comparisons for the primary tasks are listed below.
Task 1: Crown Delineation
Table 1. Scoring table. Participants have been ranked by the pairwise Jaccard Coefficient.
| Team | Pairwise Jaccard Coefficient |
|---|---|
| FEM | Best Performance |
Congratulations to the FEM group! They showed that method 4 of Dalponte et al. (2015), implemented as itcIMG in the itcSegment R package, was the best of the three methods they applied to the dataset (Table 1).
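For anyone who wants a feel for the Task 1 metric, below is a minimal sketch of how a pairwise Jaccard coefficient (intersection over union) can be computed for a predicted crown polygon and its matched field-measured crown. It uses shapely for the polygon geometry; the polygon coordinates, the matching of predicted to observed crowns, and the averaging over matched pairs are illustrative assumptions, not the organizers' scoring script or competition data.

```python
from shapely.geometry import Polygon

def jaccard_coefficient(predicted: Polygon, observed: Polygon) -> float:
    """Jaccard coefficient (intersection over union) of two crown polygons."""
    if predicted.is_empty or observed.is_empty:
        return 0.0
    union = predicted.union(observed).area
    if union == 0.0:
        return 0.0
    return predicted.intersection(observed).area / union

# Hypothetical crown polygons (coordinates in meters, not competition data).
predicted_crown = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
observed_crown = Polygon([(1, 1), (5, 1), (5, 5), (1, 5)])

# Score a set of matched (predicted, observed) pairs; here the pairwise values
# are simply averaged, which may differ from the official aggregation.
pairs = [(predicted_crown, observed_crown)]
mean_jaccard = sum(jaccard_coefficient(p, o) for p, o in pairs) / len(pairs)
print(f"Mean pairwise Jaccard coefficient: {mean_jaccard:.3f}")
```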
Task 2: Crown Alignment
Table 2. Scoring table. Participants have been ranked by the trace of the prediction matrix divided by the sum of all values in that matrix.
| Team | Score |
|---|---|
| FEM | Best Performance |
Congratulations to the FEM group! They showed that the Euclidean distance between ground points and the individual tree crowns (ITCs) was the better of the two methods they applied to the dataset (Table 2). The distance was calculated over four axes: the X coordinate, the Y coordinate, height, and crown radius.
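The sketch below illustrates the general idea behind both the alignment approach and the Task 2 score: each ground point is assigned to its nearest ITC using the Euclidean distance over the four axes (X, Y, height, crown radius), an assignment matrix is accumulated, and the score is the trace of that matrix divided by the sum of its values. The arrays, the assumption that ITC i is the true match for ground point i, and the lack of any unit scaling across the four axes are illustrative simplifications, not FEM's actual implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist

# Hypothetical feature vectors [x, y, height, crown_radius]; not competition data.
# For illustration, the ITC array is ordered so that ITC i is the true match
# for ground point i, so correct assignments fall on the diagonal.
ground_points = np.array([[10.0, 20.0, 15.2, 2.5],
                          [14.5, 22.1, 18.0, 3.1],
                          [18.2, 25.3, 12.4, 2.0]])
itcs = np.array([[10.3, 19.8, 15.0, 2.4],
                 [14.7, 22.4, 17.6, 3.0],
                 [18.0, 25.5, 12.1, 2.2]])

# Euclidean distance over the four axes, then nearest-ITC assignment per ground point.
distances = cdist(ground_points, itcs, metric="euclidean")
assigned = distances.argmin(axis=1)

# Prediction matrix: entry (i, j) counts how often ground point i was assigned to ITC j.
prediction = np.zeros((len(ground_points), len(itcs)))
for i, j in enumerate(assigned):
    prediction[i, j] += 1

# Task 2 score: trace of the prediction matrix divided by the sum of its values.
score = np.trace(prediction) / prediction.sum()
print(f"Alignment score: {score:.3f}")
```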
Task 3: Species Classification
Table 3. Scoring table. Participants have been ranked by the cross entropy cost. Rank-1 accuracy is provided in the third column.
| Team | Cross Entropy Cost | Rank-1 Accuracy |
|---|---|---|
| StanfordCCB | Best Performance | - |
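As a rough guide to the Task 3 metrics, the sketch below computes a cross entropy cost and a rank-1 accuracy from a matrix of predicted species probabilities. The probability matrix and label vector are made-up placeholders, and the clipping and normalization conventions of the official scoring may differ.

```python
import numpy as np

# Hypothetical predicted class probabilities (rows: crowns, columns: species)
# and true species labels; not competition data.
probabilities = np.array([[0.7, 0.2, 0.1],
                          [0.1, 0.8, 0.1],
                          [0.3, 0.3, 0.4],
                          [0.2, 0.5, 0.3]])
true_labels = np.array([0, 1, 2, 0])

# Cross entropy cost: mean negative log probability assigned to the true species.
# Probabilities are clipped to avoid log(0); the exact convention may differ.
eps = 1e-15
true_probs = np.clip(probabilities[np.arange(len(true_labels)), true_labels], eps, 1.0)
cross_entropy = -np.mean(np.log(true_probs))

# Rank-1 accuracy: fraction of crowns whose highest-probability species is correct.
rank1_accuracy = np.mean(probabilities.argmax(axis=1) == true_labels)

print(f"Cross entropy cost: {cross_entropy:.3f}")
print(f"Rank-1 accuracy: {rank1_accuracy:.3f}")
```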
Congratulations to the StanfordCCB group! They showed that a three-step process was the best method applied to the data: (1) dimensionality reduction; (2) multi-label classification algorithms; and (3) ensembles (Table 3).
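As an illustration of what such a three-step pipeline could look like (not StanfordCCB's actual code), the sketch below reduces dimensionality with PCA, trains two classifiers, and combines them in a soft-voting ensemble with scikit-learn. The synthetic spectra, the component count, and the choice of classifiers are all assumptions made for the example.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, log_loss
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for per-crown spectral features and species labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))
y = rng.integers(0, 4, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Step 1: dimensionality reduction; Step 2: classification algorithms;
# Step 3: a soft-voting ensemble over their predicted probabilities.
ensemble = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),
    VotingClassifier(
        estimators=[
            ("logreg", LogisticRegression(max_iter=1000)),
            ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
        ],
        voting="soft",
    ),
)
ensemble.fit(X_train, y_train)

probabilities = ensemble.predict_proba(X_test)
print(f"Cross entropy cost: {log_loss(y_test, probabilities):.3f}")
print(f"Rank-1 accuracy: {accuracy_score(y_test, probabilities.argmax(axis=1)):.3f}")
```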
Paper Collection
The full results and methodological details are available in the published paper collection at PeerJ Collections.