Result Assessment Tool (RAT)


The project's overall goal is to develop the Result Assessment Tool (RAT) into a stable, flexible and sustainable tool for conducting studies that collect and analyse data from search engines and other information retrieval systems. RAT is a software toolkit that allows researchers to conduct large-scale studies based on results from (commercial) search engines and other information retrieval systems. It consists of modules for (1) designing studies, (2) collecting data from search systems, (3) collecting judgments on the results, and (4) downloading and analysing the results.


Because of this modularity, individual components can be used for a wide range of studies of web content, such as qualitative content analyses. Through automated scraping, web documents can also be analysed quantitatively in empirical studies.
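As an illustration of what automated result collection involves (this is a sketch, not RAT's actual implementation), scraping typically means fetching a result page and extracting structured records from its HTML. The markup below (a `div` with `class="result"` wrapping a link) is hypothetical; real search engines use different, frequently changing markup.

```python
from html.parser import HTMLParser

class ResultLinkParser(HTMLParser):
    """Collects (url, title) pairs from anchors inside elements marked
    with class="result" -- a hypothetical, simplified SERP markup."""

    def __init__(self):
        super().__init__()
        self.results = []          # extracted (url, title) pairs
        self._in_result = False    # inside a class="result" container?
        self._current_url = None   # href of the anchor being read
        self._text = []            # text fragments of that anchor

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "div" and "result" in attrs.get("class", ""):
            self._in_result = True
        elif tag == "a" and self._in_result:
            self._current_url = attrs.get("href")
            self._text = []

    def handle_data(self, data):
        if self._current_url is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._current_url is not None:
            self.results.append((self._current_url, "".join(self._text).strip()))
            self._current_url = None
        elif tag == "div":
            self._in_result = False

# Example result page fragment (hypothetical markup).
html = """
<div class="result"><a href="https://example.org/a">First hit</a></div>
<div class="result"><a href="https://example.org/b">Second hit</a></div>
"""
parser = ResultLinkParser()
parser.feed(html)
```

In practice each search system needs its own parser of this kind, which is one reason a reusable, modular toolkit is valuable.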


We will ensure the sustainability of the project results through measures in three areas: (1) establishing and distributing the software, (2) establishing and maintaining a user and developer community, and (3) publishing the software as open source.


The modular, web-based software can automatically record data from search engines. Studies with questions and scales can be designed flexibly, and jurors can then evaluate the collected information objects on the basis of those questions.
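To make the study-design idea concrete, a study can be thought of as a set of scaled questions plus a table of juror judgments. The following minimal sketch uses hypothetical names and fields; it is not RAT's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    """A rating question with a closed scale, e.g. relevance from 1 to 5."""
    text: str
    scale: range  # allowed answer values

@dataclass
class Study:
    name: str
    questions: list
    # (juror, result_url, question_index) -> rating
    judgments: dict = field(default_factory=dict)

    def record(self, juror, result_url, q_index, value):
        """Store one juror's rating, validating it against the question's scale."""
        q = self.questions[q_index]
        if value not in q.scale:
            raise ValueError(f"{value} outside scale for question {q.text!r}")
        self.judgments[(juror, result_url, q_index)] = value

    def mean_score(self, result_url, q_index):
        """Average rating for one result on one question, across jurors."""
        values = [v for (j, u, qi), v in self.judgments.items()
                  if u == result_url and qi == q_index]
        return sum(values) / len(values)

# Example: two jurors rate the same result on a 1-5 relevance scale.
study = Study("SERP relevance", [Question("How relevant is this result?", range(1, 6))])
study.record("juror1", "https://example.org/a", 0, 4)
study.record("juror2", "https://example.org/a", 0, 5)
```

Keeping judgments keyed by juror, result and question makes it straightforward to compute per-result averages or inter-juror agreement later.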

A starting point for developing RAT was the observation that retrieval effectiveness studies usually require a great deal of manual work: designing the test, collecting search results, finding jurors and collecting their assessments, and analysing the test results. RAT addresses all of these issues and aims to make retrieval studies more efficient and effective.

The design of the RAT prototype has been guided by researchers' need to query external information retrieval systems in general, and search engines in particular. This need derives from researchers' interest in the quality of search results, the composition of result lists or pages, and potential biases in the results, to name but a few. This approach separates RAT from software developed to aid retrieval evaluation in settings where researchers have complete control over the systems being evaluated.

RAT allows studies to be conducted under the two major evaluation paradigms, namely system-centered and user-centered evaluations.

RAT will also be useful to other researchers who want to use search engine results for further analysis. While RAT is rooted in information retrieval evaluation, its use goes well beyond this research area. Already in the prototype phase, we found a significant need for a software tool like RAT in the search engine and information retrieval communities, and several projects have already demonstrated this.

How can you use RAT?

RAT is still in development, but a demo version is available at http://rat-software.org/home.


Upcoming Event:

Join us for the RAT Community Meeting, a gathering of enthusiastic users and interested parties who are passionate about exploring and analyzing search engine data. Whether you’re an experienced researcher, have used RAT in your studies, or are a newcomer eager to dive into this exciting field, this event is designed especially for you. Best of all, attendance is completely free!

Connect with like-minded researchers and build your professional network during socializing and networking sessions. Engage in meaningful conversations, share experiences, and collaborate with others who are also passionate about leveraging search engine data for their research.

You can look forward to:

  •  Tutorials
  •  Showcases
  •  Poster Sessions

Don’t miss this invaluable opportunity to be a part of the RAT Community Meeting. Register here to secure your place among a vibrant community of users and interested parties. We can’t wait to welcome you to this enriching event!

Publications about RAT (selected)


Schultheiß, S.; Lewandowski, D.; von Mach, S.; Yagci, N. (2023). Query sampler: generating query sets for analyzing search engines using keyword research tools. In: PeerJ Computer Science 9(e1421). https://doi.org/10.7717/peerj-cs.1421

Lewandowski, D., & Sünkler, S. (2019). Das Relevance Assessment Tool. Eine modulare Software zur Unterstützung bei der Durchführung vielfältiger Studien mit Suchmaschinen. In: Information – Wissenschaft & Praxis 70 (1), 46-56. https://doi.org/10.1515/iwp-2019-0007

Lewandowski, D.; Sünkler, S. (2012). Relevance Assessment Tool: Ein Werkzeug zum Design von Retrievaltests sowie zur weitgehend automatisierten Erfassung, Aufbereitung und Auswertung der Daten. In: Proceedings der 2. DGI-Konferenz: Social Media und Web Science – Das Web als Lebensraum. Frankfurt am Main: DGI, pp. 237-249.

Lewandowski, D.; Sünkler, S. (2013). Designing search engine retrieval effectiveness tests with RAT. In: Information Services & Use 33(1), 53-59. https://doi.org/10.3233/ISU-130691


Lewandowski, D.; Sünkler, S.; Sygulla, D. (2022, October 6-7). Result Assessment Tool (RAT). Informationswissenschaft im Wandel. Wissenschaftliche Tagung 2022 (IWWT22), Düsseldorf.

Research that used RAT to collect and analyse data (selected)

Norocel, O.C.; Lewandowski, D. (2023). Google, data voids, and the dynamics of the politics of exclusion. In: Big Data & Society. https://doi.org/10.1177/205395172211490

Haider, J.; Ekström, B.; Tattersall Wallin, E.; Gunnarsson Lorentzen, D.; Rödl, M.; Söderberg, N. (2023). Tracing online information about wind power in Sweden: An exploratory quantitative study of broader trends. https://www.diva-portal.org/smash/get/diva2:1740876/FULLTEXT01.pdf

Sünkler, S.; Lewandowski, D. (2017). Does it matter which search engine is used? A user study using post-task relevance judgments. In: Proceedings of the 80th Annual Meeting of the Association of Information Science and Technology, Crystal City, VA, USA. https://doi.org/10.1002/pra2.2017.14505401044

Schaer, P.; Mayr, P.; Sünkler, S.; Lewandowski, D. (2016). How Relevant is the Long Tail? A Relevance Assessment Study on Million Short. In: N. Fuhr et al. (eds.): Experimental IR Meets Multilinguality, Multimodality, and Interaction (Lecture Notes in Computer Science, Vol. 9822), pp. 227-233. https://doi.org/10.1007/978-3-319-44564-9_20

Behnert, C. (2015). LibRank: New Approaches for Relevance Ranking in Library Information Systems. In: Pehar, F.; Schlögl, C.; Wolff, C. (eds.): Re:inventing Information Science in the Networked Society. Proceedings of the 14th International Symposium on Information Science (ISI 2015). Glückstadt: Verlag Werner Hülsbusch, pp. 570-572.

Lewandowski, D. (2015). Evaluating the retrieval effectiveness of web search engines using a representative query sample. In: Journal of the American Society for Information Science and Technology (JASIST) 66(9), 1763-1775. https://doi.org/10.1002/asi.23304

Lewandowski, D. (2013). Verwendung von Skalenbewertungen in der Evaluierung von Web-Suchmaschinen. In: Hobohm, H.-C. (ed.): Informationswissenschaft zwischen virtueller Infrastruktur und materiellen Lebenswelten. Proceedings des 13. Internationalen Symposiums für Informationswissenschaft (ISI 2013). Boizenburg: Verlag Werner Hülsbusch, pp. 339-348.

Funded by the German Research Foundation (DFG – Deutsche Forschungsgemeinschaft):
Funding period: 08/2021 - 07/2024, grant number 460676551.