
Commit

Merge pull request #270 from freddiev4/evaluating-search-engines
Add notebook for "Evaluating AI Search Engines with the `judges` Library"
stevhliu authored Jan 31, 2025
2 parents 8273c06 + 06928bf commit 52e7130
Showing 3 changed files with 1,643 additions and 0 deletions.
2 changes: 2 additions & 0 deletions notebooks/en/_toctree.yml
@@ -32,6 +32,8 @@
   title: Using LLM-as-a-judge for an automated and versatile evaluation
+- local: llm_judge_evaluating_ai_search_engines_with_judges_library
+  title: Evaluating AI Search Engines with `judges` - the open-source library for LLM-as-a-judge evaluators
 - local: issues_in_text_dataset
   title: Detecting Issues in a Text Dataset with Cleanlab
 - local: annotate_text_data_transformers_via_active_learning
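Applied to notebooks/en/_toctree.yml, the hunk above registers the new notebook directly after the existing LLM-as-a-judge entry. The snippet below is a rough reconstruction of the result, with surrounding entries taken from the diff context and indentation assumed to follow standard YAML list style:

# Excerpt of notebooks/en/_toctree.yml after this commit (reconstructed from the diff above)
- local: llm_judge
  title: Using LLM-as-a-judge for an automated and versatile evaluation
- local: llm_judge_evaluating_ai_search_engines_with_judges_library
  title: Evaluating AI Search Engines with `judges` - the open-source library for LLM-as-a-judge evaluators
- local: issues_in_text_dataset
  title: Detecting Issues in a Text Dataset with Cleanlab

Each `local` key is the notebook's file name (without extension) and `title` is the label shown in the cookbook's table of contents.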
1 change: 1 addition & 0 deletions notebooks/en/index.md
@@ -7,6 +7,7 @@ applications and solving various machine learning tasks using open-source tools
 
 Check out the recently added notebooks:
 
+- [Evaluating AI Search Engines with `judges` - the open-source library for LLM-as-a-judge evaluators](llm_judge_evaluating_ai_search_engines_with_judges_library)
 - [Structured Generation from Images or Documents Using Vision Language Models](structured_generation_vision_language_models)
 - [Vector Search on Hugging Face with the Hub as Backend](vector_search_with_hub_as_backend)
 - [Multi-Agent Order Management System with MongoDB](mongodb_smolagents_multi_micro_agents)