fix tab (#207)
Co-authored-by: ZhangJianyu <zhang.jianyu@outlook.com>
NeoZhangJianyu and arthw authored Oct 17, 2024
1 parent 3bd7844 commit 69ad41c
12 changes: 4 additions & 8 deletions examples/ChatQnA/deploy/xeon.md
@@ -86,8 +86,8 @@ there are 8 required and one optional docker image.
:::::{tab-item} Pull
:sync: Pull

If you decide to pull the docker containers rather than build them locally,
you can proceed to the next step, where all the necessary containers will
be pulled from Docker Hub.
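The pull path can be sketched as below. The image names and the `latest` tag are assumptions for illustration; the compose file used later in this guide is the authoritative list of images.

```shell
# Sketch: pull prebuilt images instead of building locally.
# Image names and the :latest tag are assumptions -- check the
# compose file for the real list before pulling.
IMAGES="opea/chatqna opea/chatqna-ui opea/retriever opea/dataprep"
for img in $IMAGES; do
  echo "docker pull ${img}:latest"
  # docker pull "${img}:latest"   # uncomment on a machine with Docker installed
done
```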

:::::
@@ -588,7 +588,7 @@ while the reranking service is not.

### vLLM and TGI Service

On first startup, this service takes longer because it needs to download
the model files. Once the download finishes, the service will be ready.

Try the command below to check whether the LLM serving is ready.
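A minimal readiness poll can be sketched as follows. The port `9009` and the `/health` path are assumptions for illustration; substitute the port and route actually mapped in your compose file.

```shell
# Sketch: poll the LLM serving endpoint a few times before giving up.
# HOST_IP, port 9009, and the /health path are assumptions -- adjust
# them to match your deployment.
HOST_IP=${HOST_IP:-localhost}
LLM_ENDPOINT="http://${HOST_IP}:9009"
for attempt in 1 2 3; do
  if curl -sf "${LLM_ENDPOINT}/health" >/dev/null 2>&1; then
    echo "LLM serving is ready"
    break
  fi
  echo "attempt ${attempt}: LLM serving not ready yet at ${LLM_ENDPOINT}"
  sleep 1
done
```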
@@ -649,11 +649,9 @@ The TGI service generates text for the input prompt. Here is the expected result from
::::


### LLM Microservice

This service depends on the LLM backend service above. On first startup it
can take a long time to become ready while it waits for the backend to
finish starting.

::::{tab-set}
@@ -687,8 +685,6 @@ For parameters in TGI mode, please refer to the [HuggingFace InferenceClient API](h
::::


You will get the generated text from the LLM:

```
