# RAG Chatbot Modifications

A RAG chatbot built with [LlamaIndex](https://www.llamaindex.ai/) and [Together.ai](https://www.together.ai/).

## Getting Started

Copy `.example.env` to `.env` and fill in values for the environment variables.

1. Install the dependencies:

   ```
   npm install
   ```

2. Generate the embeddings and store them locally in the `cache` folder. You can also provide your own PDF in the `data` folder instead of the default one:

   ```
   npm run generate
   ```

3. Run the app and send messages to your chatbot. It will use context from the embeddings to answer questions:

   ```
   npm run dev
   ```

## How to deploy

Make sure that:

- the `docker`, `terraform`, and `aws` CLI tools are installed locally,
- you have a Docker Hub account and an AWS account, and
- the `aws` CLI is configured with valid credentials, since `terraform` needs them to deploy resources.

### Prepare the Docker image

#### Build the Docker image

```
docker buildx build --platform linux/amd64 . -t <your-dockerhub-username>/headstarter-ai-chatbot
```

#### Log in to your Docker Hub account

```
docker login
```

#### Push the image to your repository, making sure the repository is public (it is by default)

```
docker push <your-dockerhub-username>/headstarter-ai-chatbot
```

### Deploy an EC2 instance and run the container inside it using Terraform

Before running `terraform` commands, make sure:

- your AWS credentials (`AWS Access Key ID` and `AWS Secret Access Key`) are configured in the `aws` CLI locally,
- you have an SSH public/private key pair at `~/.ssh/id_rsa` and `~/.ssh/id_rsa.pub`,
- your local env var file is in the root directory and is called `.env.local`.

```
cd deploy
terraform init
terraform apply --auto-approve
```

### Destroy the EC2 instance

```
terraform destroy --auto-approve
```

## Documentation

- [Together AI Documentation](https://docs.together.ai/docs) - learn about Together.ai (inference, fine-tuning, embeddings).
- [LlamaIndex Documentation](https://docs.llamaindex.ai) - learn about LlamaIndex (Python features).
- [LlamaIndexTS Documentation](https://ts.llamaindex.ai) - learn about LlamaIndex (TypeScript features).
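
### Note: example `.env` contents

The setup step above copies `.example.env` to `.env` but does not list the variables it contains. As a rough sketch only — the variable name below is a guess, not taken from this repo, so check `.example.env` for the actual names — an app calling Together.ai typically needs at least an API key:

```
# Hypothetical variable name - see .example.env for the real ones
TOGETHER_API_KEY=your-together-api-key
```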
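
### Note: generating the SSH key pair

The deploy prerequisites above require an SSH key pair at `~/.ssh/id_rsa` and `~/.ssh/id_rsa.pub`. If you don't already have one, it can be generated with the standard OpenSSH `ssh-keygen` tool:

```shell
# Create the .ssh directory if needed, then generate an RSA key pair
# at the path Terraform expects, only if one does not already exist.
# -N "" sets an empty passphrase; use a passphrase if you prefer.
mkdir -p ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
```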