[online doc] fix tab error in online doc #207

Merged · 1 commit · Oct 17, 2024
12 changes: 4 additions & 8 deletions examples/ChatQnA/deploy/xeon.md
@@ -86,8 +86,8 @@ there are 8 required docker images and an optional one.
:::::{tab-item} Pull
:sync: Pull

If you decide to pull the Docker containers rather than build them locally,
you can proceed to the next step, where all the necessary containers will
be pulled from Docker Hub.
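
As a rough sketch of that step, assuming the compose file shipped with the example (the path below is an assumption for illustration; use the compose file from the release you are deploying):

```bash
# Hypothetical checkout path; adjust to where compose.yaml lives in your release.
cd ChatQnA/docker_compose/intel/cpu/xeon
docker compose pull        # pull every image referenced by compose.yaml
docker images | grep opea  # spot-check that the OPEA images are now local
```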

:::::
@@ -588,7 +588,7 @@ while reranking service are not.

### vLLM and TGI Service

On first startup, this service takes longer because it needs to download the model files.
After the download finishes, the service is ready.

Try the command below to check whether the LLM serving endpoint is ready.
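
As a minimal readiness check, a TGI-style request such as the following can be used; the host variable, port, and generation parameters are assumptions and should match your own compose configuration (vLLM exposes an OpenAI-compatible endpoint instead):

```bash
# Assumed host/port; substitute the values from your compose.yaml.
curl http://${host_ip}:9009/generate \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":17}}'
```

A JSON response containing a `generated_text` field indicates the backend has finished loading the model and is serving requests.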
@@ -649,11 +649,9 @@ TGI service generates text for the input prompt. Here is the expected result from
::::


### LLM Microservice

This service depends on the LLM backend service above. On first startup it can take
a long time to become ready, because it waits for the backend to finish starting.
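
Once it is up, a quick smoke test against the microservice might look like the sketch below; the port (9000) and request fields are assumptions based on typical OPEA deployments, so adjust them to your setup:

```bash
# Assumed endpoint and payload for the LLM microservice; verify against your deployment.
curl http://${host_ip}:9000/v1/chat/completions \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"query":"What is Deep Learning?","max_tokens":17,"top_p":1,"temperature":0.7}'
```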

::::{tab-set}
@@ -687,8 +685,6 @@ For parameters in TGI modes, please refer to [HuggingFace InferenceClient API](h
::::


You will get generated text from the LLM:

```