Replies: 1 comment
-
This is a very tricky topic and something I've been grappling with too. Consider using an independent measure of your overall RAG pipeline (like the BEIR evaluator in LlamaIndex) and assess whether there are any performance differences between two different parsing strategies. For direct benchmarks, have a flick through arXiv; I saw this after a couple of searches and it might be worth your while: https://arxiv.org/pdf/2412.07626
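Rough sketch of what I mean, with one caveat: BEIR runs on fixed public corpora, so for your own PDFs a closer fit is LlamaIndex's RetrieverEvaluator over auto-generated questions. This is a minimal sketch, not a definitive harness; it assumes recent llama-index / llama-parse releases, OPENAI_API_KEY and LLAMA_CLOUD_API_KEY set in the environment, and "sample.pdf" is just a placeholder path.

```python
# Compare two parsing strategies on the same PDF by auto-generating
# questions from each parse and measuring retrieval hit rate / MRR.
import asyncio

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.evaluation import (
    RetrieverEvaluator,
    generate_question_context_pairs,
)
from llama_index.core.node_parser import SentenceSplitter
from llama_index.llms.openai import OpenAI
from llama_parse import LlamaParse


async def score_parse(name: str, documents) -> None:
    """Chunk the documents, synthesize Q/A pairs, and score retrieval."""
    nodes = SentenceSplitter(chunk_size=512).get_nodes_from_documents(documents)
    # Questions are generated per parse so the expected node IDs line up
    # with the corpus being evaluated; any LLM works here.
    qa_dataset = generate_question_context_pairs(
        nodes, llm=OpenAI(model="gpt-4o-mini"), num_questions_per_chunk=1
    )
    retriever = VectorStoreIndex(nodes).as_retriever(similarity_top_k=5)
    evaluator = RetrieverEvaluator.from_metric_names(
        ["hit_rate", "mrr"], retriever=retriever
    )
    results = await evaluator.aevaluate_dataset(qa_dataset)
    for metric in ("hit_rate", "mrr"):
        mean = sum(r.metric_vals_dict[metric] for r in results) / len(results)
        print(f"{name} {metric}: {mean:.3f}")


async def main() -> None:
    # Strategy A: LlamaParse (markdown output); strategy B: plain PDF reader.
    parsed = LlamaParse(result_type="markdown").load_data("sample.pdf")
    plain = SimpleDirectoryReader(input_files=["sample.pdf"]).load_data()
    await score_parse("llamaparse", parsed)
    await score_parse("baseline", plain)


if __name__ == "__main__":
    asyncio.run(main())
```

One thing to keep in mind: because the questions are generated separately from each parse, the two scores are indicative rather than a strict head-to-head; for a rigorous comparison you'd want a shared, hand-labeled question set over the same source document.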
-
Does anyone know of any benchmark datasets that I could use to evaluate LlamaParse against other, simpler existing solutions? The example code contains one comparison against not using LlamaParse on the PDF, but I want to do more than just some one-off comparisons. Thanks greatly in advance.