As can be seen, a substantial improvement was obtained by using the evaluation corpus built with a neighbourhood of 40% of TP. We had hoped the ``Full'' run would yield comparable results, but the ``TP40'' run achieved nearly double the MRR of the ``Full'' run (0.0844 vs. 0.0465).
Corpus | Success@1 | Success@5 | Success@10 | Success@20 | Success@50 | Mean Reciprocal Rank |
Full | 0.0224 | 0.0672 | 0.1119 | 0.1418 | 0.1866 | 0.0465 |
TP10 | 0.0224 | 0.0373 | 0.0672 | 0.0821 | 0.1119 | 0.0331 |
TP20 | 0.0299 | 0.0448 | 0.0672 | 0.1045 | - | 0.0446 |
TP40 | 0.0597 | 0.0970 | 0.1119 | 0.1418 | 0.2164 | 0.0844 |
TP60 | 0.0522 | 0.1045 | 0.1269 | 0.1642 | 0.2090 | 0.0771 |
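The Success@N and MRR figures reported above are derived from the rank of the first relevant document retrieved for each topic. A minimal sketch of how these metrics are typically computed (the function names are ours, not taken from the evaluation toolkit):

```python
def mean_reciprocal_rank(ranks, num_topics):
    """MRR over all topics. `ranks` holds the rank of the first
    relevant document for each solved topic; topics with no
    relevant document retrieved contribute 0 to the sum."""
    return sum(1.0 / r for r in ranks) / num_topics

def success_at(ranks, n, num_topics):
    """Success@N: fraction of topics whose first relevant
    document appears at rank n or better."""
    return sum(1 for r in ranks if r <= n) / num_topics
```

For example, with first-relevant ranks of 1, 2, and 4 over four topics (one topic unsolved), MRR is (1 + 1/2 + 1/4) / 4 and Success@2 is 2/4.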
Three teams participated in the bilingual ``English to Spanish'' subtask at WebCLEF, each submitting at least one run [1,11,8]. Table 3 compares the results obtained by each team. Our second-place result in this subtask could be improved considerably by applying a better translation process and a better representation model in our information retrieval system.
Team | Success@1 | Success@5 | Success@10 | Success@20 | Success@50 | Mean Reciprocal Rank (over 134 topics) |
UNED | 0.0821 | 0.1045 | 0.1194 | 0.1343 | 0.2090 | 0.0930 |
BUAP/UPV | 0.0597 | 0.0970 | 0.1119 | 0.1418 | 0.2164 | 0.0844 |
ALICANTE | 0.0299 | 0.0522 | 0.0597 | 0.0746 | 0.0970 | 0.0395 |