RAG Playground πŸ›#

Demo

A live version of the application is hosted on Streamlit; try it out yourself using the link below: RAG Playground on Streamlit

Introduction#

A Streamlit application that lets users upload a PDF file and chat with an LLM to perform document analysis in a playground environment. Compare the performance of LLMs across endpoint providers to find the best possible configuration for your speed, latency, and cost requirements using the dynamic routing feature. Experiment intuitively by tuning model hyperparameters such as temperature, chunk size, and chunk overlap, or try the model with and without conversational capabilities.
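The chunk size and chunk overlap settings control how the uploaded document is split before retrieval: each chunk shares its trailing characters with the start of the next one, so context is not lost at chunk boundaries. A minimal sketch of overlapping chunking (a hypothetical helper for illustration, not the app's actual implementation):

```python
def chunk_text(text: str, chunk_size: int = 200, chunk_overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks.

    Each chunk starts (chunk_size - chunk_overlap) characters after the
    previous one, so consecutive chunks share chunk_overlap characters.
    """
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

Larger overlap improves recall for queries that span a boundary, at the cost of indexing more redundant text.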

You can find more model and provider information in the Unify benchmark interface.

Usage#

  1. Visit the application: RAG Playground

  2. Input your Unify API Key. If you don’t have one yet, log in to the Unify Console to get yours.

  3. Select the model and endpoint provider of your choice from the drop-down menu. You can find both model and provider information in the benchmark interface.

  4. Upload your document(s) and click the Submit button.

  5. Enjoy the application!
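Unify addresses each endpoint as a "model@provider" string, which is what the model and provider drop-downs above resolve to. A sketch of composing one (the model/provider names and the UNIFY_KEY variable name are illustrative assumptions, not taken from the app's code):

```python
import os

def build_endpoint(model: str, provider: str) -> str:
    """Compose a Unify endpoint string in the "model@provider" format."""
    return f"{model}@{provider}"

# The key from the Unify Console is typically passed out-of-band;
# UNIFY_KEY is an assumed environment variable name for this sketch.
api_key = os.environ.get("UNIFY_KEY", "")

endpoint = build_endpoint("llama-3-8b-chat", "together-ai")
```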

Repository and Local Deployment#

The repository is located at RAG Playground Repository.

To run the application locally, follow these steps:

  1. Clone the repository to your local machine.

  2. Set up your virtual environment and install the dependencies from requirements.txt:

python -m venv .venv
source .venv/bin/activate  # On Windows use `.venv\Scripts\activate`
pip install -r requirements.txt

  3. Run rag_script.py with the Streamlit module:

python -m streamlit run rag_script.py

Contributors#

| Name | GitHub Profile |
| --- | --- |
| Anthony Okonneh | AO |
| Oscar Arroyo Vega | OscarAV |
| Martin Oywa | Martin Oywa |