Update ChatQnA example with Falcon LLM #560

Open
arun-gupta opened this issue Aug 8, 2024 · 11 comments
Labels: Hacktoberfest, OPEAHack (Issue created for OPEA Hackathon)

Comments

@arun-gupta
Contributor

arun-gupta commented Aug 8, 2024

Update the ChatQnA example to use Falcon as the LLM.

This would require including Falcon as part of the validation at https://github.com/opea-project/GenAIComps/tree/main/comps/llms, and then creating an updated ChatQnA that uses this microservice to serve the Falcon LLM.
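For concreteness, a minimal sketch of what such an updated example could look like on Xeon, assuming the compose stack reads the serving model from LLM_MODEL_ID as the current set_env.sh does (variable name and model id should be verified against the repo):

```bash
# Sketch: run ChatQnA with Falcon instead of the default model.
# Assumes the compose files read LLM_MODEL_ID, as set_env.sh does today.
cd GenAIExamples/ChatQnA/docker_compose/intel/cpu/xeon
source set_env.sh                        # host IP, proxies, default models, etc.
export LLM_MODEL_ID="tiiuae/falcon-7b"   # override the default LLM
docker compose up -d                     # the LLM serving container pulls Falcon
```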

@arun-gupta changed the title from ChatQnA example with Falcon LLM to Update ChatQnA example with Falcon LLM Aug 8, 2024
@chickenrae added the OPEAHack label Aug 8, 2024
@lucasmelogithub

Supporting Falcon-11B would be great.

@kevinintel
Collaborator

TGI-Gaudi and vLLM support Falcon 40B and Falcon 7B.
We will validate Falcon-11B.
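For anyone who wants to check a Falcon model against a model server outside of OPEA first, a minimal sketch using upstream TGI (on Gaudi the ghcr.io/huggingface/tgi-gaudi image and its Habana flags would be used instead; the image tag here is an assumption to pin as needed):

```bash
# Serve a Falcon model with upstream TGI; swap the id for falcon-11B or falcon-40b.
model=tiiuae/falcon-7b
volume=$PWD/data                          # cache model weights between runs

docker run --shm-size 1g -p 8080:80 -v $volume:/data \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id $model                       # add --gpus all on a GPU host
```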

@lucasmelogithub

> TGI-Gaudi and vLLM support Falcon 40B and Falcon 7B. We will validate Falcon-11B.

Great, thanks for the update.

@chickenrae
Member

@kevinintel This is marked for the OPEA Hackathon; are you going to complete this in October? If not, can you unassign yourself so we can have someone take this on?

@lucasmelogithub

Question: models are set with environment variables via set_env.sh: https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/docker_compose/intel/cpu/xeon/set_env.sh

What is our strategy? Create multiple set_env.sh files, e.g. set_env_falcon11B.sh?
Or just update the README.md with instructions? (One alternative is sketched after this comment.)

In the Terraform module we developed, we create our own set_env.sh and set the model there.
I plan to contribute links to these modules back to OPEA via README.md links; I'll open the PR as a draft for discussion soon.
https://github.com/intel/terraform-intel-aws-vm/tree/main/examples/gen-ai-xeon-opea-chatqna-falcon11B
https://github.com/intel/optimized-cloud-recipes/tree/main/recipes/ai-opea-chatqna-xeon-falcon11B
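One alternative to multiple files would be to keep a single set_env.sh and let the model id be overridden from the environment; a hypothetical sketch (default model id borrowed from the current script):

```bash
# set_env.sh fragment (sketch): one file, caller-overridable model id.
# Keeps the currently validated model as the default.
export LLM_MODEL_ID="${LLM_MODEL_ID:-Intel/neural-chat-7b-v3-3}"

# Usage for Falcon, with no second copy of the script:
#   export LLM_MODEL_ID="tiiuae/falcon-11B"
#   source set_env.sh
```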

@chickenrae
Member

@arun-gupta should be able to give some guidance.

@arun-gupta
Contributor Author

This should really be somebody from engineering. @kding1 @mkbhanda?

@lucasmelogithub

lucasmelogithub commented Oct 11, 2024

I'm also open to a call with OPEA contributors if that's easier for brainstorming.

I think we need to discuss at least:

  • How to handle multiple models: multiple set_env.sh files vs. README instructions, etc.
  • Terraform/Ansible modularization and repo location.

On Terraform/Ansible: those modules have more use cases than just OPEA (and were developed before OPEA), which is why they live in other repos today. I'm open to discussing the best options for usability and version control.

@mkbhanda
Collaborator

@lucasmelogithub let us not proliferate set_env.sh files that differ only in model_id :-) That set_env.sh really is a file a user is expected to edit, with proxy, IP address, model id, and other values/choices as the case may be. I like how @kevinintel offered to verify that Falcon-11B works with the TGI and vLLM model servers; these are typically also tested by the model providers, given how popular those two servers are.

May I suggest you update the README file with a table that shares all the models verified to work (and add a date), because this list may go out of date too soon! We could also provide a list of model_ids in the set_env.sh file (again, this can never hope to be exhaustive; just a few popular ones that we have tested) and comment out all but one as a potential default.

What will be crucial is that if a model is very large, the VM instance (if using Docker) or the Kubernetes worker nodes need to be large enough. So in that sense a model choice (small/medium/large/extra large) has other ramifications.
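Concretely, the set_env.sh fragment described above might look like this (the model list is illustrative, not a verified matrix):

```bash
# Tested model ids (see the README table for dates and model servers).
# Uncomment exactly one; larger models need larger VM instances or worker nodes.
export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"    # default (small)
# export LLM_MODEL_ID="tiiuae/falcon-7b"           # small
# export LLM_MODEL_ID="tiiuae/falcon-11B"          # medium
# export LLM_MODEL_ID="tiiuae/falcon-40b"          # large
```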

@lucasmelogithub

lucasmelogithub commented Oct 11, 2024

> @lucasmelogithub let us not proliferate set_env.sh files that differ only in model_id :-) [...] May I suggest you update the README file with a table that shares all the models verified to work (and add a date) [...]

Agree with the README.md approach, thanks for the direction. I will create a PR next week with an LLM table.
I have successfully tested Falcon-11B with TGI; I can test with vLLM too and will make the README reflect that (a quick smoke test is sketched at the end of this comment).

We (Intel) have partnered with TII/AWS to showcase Falcon-11B on OPEA.
AWS will demo OPEA + Falcon-11B using our Intel Cloud Optimization Modules for Terraform/Ansible on AWS at a major conference (GITEX) next week.
https://github.com/intel/terraform-intel-aws-vm/tree/main/examples/gen-ai-xeon-opea-chatqna-falcon11B
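For reference, a smoke test against a TGI endpoint serving Falcon-11B might look like this (host and port are assumptions; adjust to the compose setup in use):

```bash
# A JSON response with a "generated_text" field confirms the model is serving.
curl http://localhost:8080/generate \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"inputs": "What is OPEA?", "parameters": {"max_new_tokens": 32}}'
```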


@lucasmelogithub

PR created: #970
