Commit 44a8bef (1 parent: 9b92ca1)

Update README.md

Added a few badges

File tree: 1 file changed (+10, −0 lines)


README.md (+10 lines)
@@ -3,6 +3,16 @@
 </h1>
 <br>
 
+
+<h1 align="center">
+<a href="https://badge.fury.io/py/groqeval"><img src="https://badge.fury.io/py/groqeval.svg" alt="PyPI version" height="19"></a>
+<a href="https://codecov.io/github/djokester/groqeval" height="18">
+<img src="https://codecov.io/github/djokester/groqeval/graph/badge.svg?token=HS4K1Z7F3P"/>
+</a>
+<img alt="GitHub Actions Workflow Status" src="https://img.shields.io/github/actions/workflow/status/djokester/groqeval/codecov.yml?branch=main&style=flat&label=Tests">
+
+</h1>
+
 ---
 
 GroqEval is a powerful and easy-to-use evaluation framework designed specifically for language model (LLM) performance assessment. Utilizing the capabilities of the Groq API, GroqEval provides developers, researchers, and AI enthusiasts with a robust set of tools to rigorously test and measure the relevance and accuracy of responses generated by language models.
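To make "measuring the relevance of responses" concrete, here is a deliberately simple sketch of the kind of scoring such a framework automates. Note this is a toy keyword-overlap metric, not GroqEval's actual interface: the real library uses an LLM judge via the Groq API, and the function name and scoring rule below are illustrative assumptions only.

```python
# Toy relevance metric: fraction of prompt keywords found in the response.
# Illustrative only -- NOT GroqEval's API; real LLM evaluators use a model
# as judge rather than keyword overlap.

def relevance_score(prompt: str, response: str) -> float:
    """Return the fraction (0.0-1.0) of prompt keywords present in the response."""
    # Treat words longer than three characters as keywords, ignoring punctuation.
    keywords = {w.lower().strip(".,?") for w in prompt.split() if len(w) > 3}
    if not keywords:
        return 0.0
    hits = sum(1 for w in keywords if w in response.lower())
    return hits / len(keywords)

score = relevance_score(
    "What causes tides in the ocean?",
    "Ocean tides are caused mainly by the gravitational pull of the Moon.",
)
print(round(score, 2))  # → 0.5
```

An LLM-as-judge evaluator replaces the keyword check with a model call, which is why it can credit paraphrases (here, "caused" is not matched against the keyword "causes") that a lexical metric misses.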
