
Results

Table 2 shows the results for every run submitted. The first column indicates the name of each run, and the last column shows the Mean Reciprocal Rank (MRR) obtained for that run. The intermediate columns show the average success at different numbers of documents retrieved; for instance, the second column indicates the average success of the CLIR system at the first retrieved document. The ``TP20'' approach retrieved fewer than 50 results, and therefore its average success at 50 was not calculated.
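For clarity, both measures follow their standard definitions, restated here since the section does not define them explicitly: the Mean Reciprocal Rank averages, over the topic set $Q$, the reciprocal of the rank at which the first relevant document is retrieved, and the success at $N$ is the fraction of topics with at least one relevant document among the first $N$ results.

\[
\mathrm{MRR} = \frac{1}{|Q|}\sum_{i=1}^{|Q|}\frac{1}{\mathit{rank}_i},
\qquad
\mathrm{Success@}N = \frac{\left|\{\, i : \mathit{rank}_i \le N \,\}\right|}{|Q|}
\]

where $\mathit{rank}_i$ denotes the position of the first relevant document returned for topic $i$; topics for which no relevant document is retrieved contribute zero to the MRR sum.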

As can be seen, an important improvement was obtained by using an evaluation corpus built with a neighbourhood of 40% of TP. We had hoped to obtain comparable results with the ``Full'' run, but the ``TP40'' run achieved nearly twice the MRR of the ``Full'' run.


Table 2: Evaluation results
Corpus    Average Success at                                Mean Reciprocal
          1        5        10       20       50            Rank
Full      0.0224   0.0672   0.1119   0.1418   0.1866        0.0465
TP10      0.0224   0.0373   0.0672   0.0821   0.1119        0.0331
TP20      0.0299   0.0448   0.0672   0.1045   -             0.0446
TP40      0.0597   0.0970   0.1119   0.1418   0.2164        0.0844
TP60      0.0522   0.1045   0.1269   0.1642   0.2090        0.0771

Three teams participated in the bilingual ``English to Spanish'' subtask at WebCLEF, and each team submitted at least one run [1,11,8]. Table 3 compares the results obtained by each team. Our second-place result in this subtask could be improved considerably by applying a better translation process and by using a better representation model in our information retrieval system.


Table 3: All teams' results
Team        Average Success at                               Mean Reciprocal Rank
Name        1        5        10       20       50           over 134 Topics
UNED        0.0821   0.1045   0.1194   0.1343   0.2090       0.0930
BUAP/UPV    0.0597   0.0970   0.1119   0.1418   0.2164       0.0844
ALICANTE    0.0299   0.0522   0.0597   0.0746   0.0970       0.0395

