
# RAG Chatbot Modifications

A RAG chatbot built with LlamaIndex and Together.ai.

## Getting Started

Copy the .example.env file to .env and fill in values for the environment variables.
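A minimal sketch of that step; the variable names mentioned in the comment (such as TOGETHER_API_KEY) are assumptions and should match whatever .example.env actually lists:

```bash
# Copy the example env file, then edit .env and set the required values
# (the exact variable names come from .example.env, e.g. an API key such as TOGETHER_API_KEY)
cp .example.env .env
```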

1. Install the dependencies.

   ```bash
   npm install
   ```

2. Generate the embeddings and store them locally in the cache folder. You can also provide your own PDF in the data folder instead of the default one (see the example after this list).

   ```bash
   npm run generate
   ```

3. Run the app and send messages to your chatbot. It will use context from the embeddings to answer questions.

   ```bash
   npm run dev
   ```
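For example, to index your own document instead of the default one, you could drop a PDF into the data folder and regenerate; the file name below is just a placeholder:

```bash
# Add your own PDF to the data folder (optionally remove the default file first),
# then rebuild the embeddings and start the app
cp ~/Documents/my-notes.pdf data/
npm run generate
npm run dev
```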

## How to deploy

Make sure you have the docker, terraform, and aws CLI tools installed locally, that you have a Docker Hub account and an AWS account, and that your aws CLI is configured with credentials, since Terraform needs them to deploy resources.

### Prepare docker image

Build the docker image:

```bash
docker buildx build --platform linux/amd64 . -t <your_docker_hub_name>/headstarter-ai-chatbot
```
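If you want to confirm the image targets the right platform before pushing, this check may help (the tag must match whatever you used above):

```bash
# Print the OS/architecture the image was built for; expect linux/amd64
docker image inspect <your_docker_hub_name>/headstarter-ai-chatbot --format '{{.Os}}/{{.Architecture}}'
```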

Log in to your Docker Hub account:

```bash
docker login
```

Push the image to your repository, and make sure the repository is public (it is by default):

```bash
docker push <your_docker_hub_name>/headstarter-ai-chatbot
```
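Optionally, smoke-test the container locally before deploying. This is a sketch: the port (3000) and the use of --env-file are assumptions about how the app is configured, so adjust them to match your setup:

```bash
# Run the image locally, passing the env file and exposing the assumed app port
docker run --rm -p 3000:3000 --env-file .env <your_docker_hub_name>/headstarter-ai-chatbot
```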

### Deploy EC2 and run the container inside it using Terraform

Before running the terraform commands, make sure:

- your AWS credentials (AWS Access Key ID and AWS Secret Access Key) are configured in the aws CLI locally,
- you have an SSH public/private key pair at ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub,
- your local env var file is at the root directory and is called .env.local.
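If you still need to set up the credentials or the key pair, the following commands may help (standard aws and ssh-keygen usage, not specific to this repo):

```bash
# Configure AWS credentials (prompts for Access Key ID and Secret Access Key)
aws configure
# Verify the credentials actually work
aws sts get-caller-identity
# Generate an SSH key pair at the expected path if you don't have one yet
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""
```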
Then, from the repository root, run:

```bash
cd deploy
terraform init
terraform apply --auto-approve
```
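Once the apply finishes, you can list whatever outputs the configuration in deploy/ defines (for example a public IP, if such an output exists):

```bash
# Show all Terraform outputs (run from the deploy directory)
terraform output
```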

### Destroy EC2

```bash
terraform destroy --auto-approve
```

## Documentation