test: TC for Metric P0 nv_load_time per model #7697

Open · wants to merge 17 commits into base: main

Conversation

@indrajit96 (Contributor) commented Oct 14, 2024

What does the PR do?

Adds test cases for the per-model load time metric.

Checklist

  • PR title reflects the change and is of format <commit_type>: <Title>
  • Changes are described in the pull request.
  • Related issues are referenced.
  • Populated the GitHub labels field
  • Added test plan and verified test passes.
  • Verified that the PR passes existing CI.
  • Verified copyright is correct on all changed files.
  • Added succinct git squash message before merging ref.
  • All template sections are filled out.
  • Optional: Additional screenshots for behavior/output changes with before/after.

Commit Type:

Check the conventional commit type box here and add the label to the GitHub PR.

  • build
  • ci
  • docs
  • feat
  • fix
  • perf
  • refactor
  • revert
  • style
  • test

Related PRs:

Core: triton-inference-server/core#397

Where should the reviewer start?

qa/L0_metrics/general_metrics_test.py

Test plan:

Added tests for the following scenarios (a minimal sketch of the metric check follows this list):

  1. Normal Mode Model Load
  2. Explicit Model Load
  3. Explicit Model Unload
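
A minimal sketch of the kind of check these tests make, assuming Triton's default metrics endpoint on port 8002 (the helper and model names here are illustrative, not the PR's exact code):

```
import requests

METRICS_URL = "http://localhost:8002/metrics"  # Triton's default metrics port

def get_model_load_duration(model_name):
    # Scrape the metrics endpoint and return the gauge value for this model.
    for line in requests.get(METRICS_URL).text.splitlines():
        if line.startswith("nv_model_load_duration_secs") and f'model="{model_name}"' in line:
            return float(line.rsplit(" ", 1)[1])
    raise AssertionError(f"No load-duration metric found for '{model_name}'")

# After a successful load, the reported duration should be positive.
assert get_model_load_duration("simple") > 0.0
```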

Background

Improve metrics in Triton

@indrajit96 changed the title from "TC for Metric P0 nv_load_time per model" to "test: TC for Metric P0 nv_load_time per model" on Oct 14, 2024
@indrajit96 requested a review from kthui on October 22, 2024
kthui (Contributor) previously approved these changes Oct 22, 2024 and left a comment:


Nice work! Make sure the CI passes before merging.

#### Load Time Per-Model
The *Model Load Duration* reflects the time to load a model from storage into GPU/CPU in seconds.
```
# HELP nv_model_load_duration_secs Model load time in seconds
```
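
For reference, a sample for this gauge in Prometheus exposition format would look something like the following (the model name, version label, and value are hypothetical):

```
# TYPE nv_model_load_duration_secs gauge
nv_model_load_duration_secs{model="simple",version="1"} 4.3
```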
Contributor:

Do we need a sample output for a gauge metric?

```
# Test 3 for explicit mode UNLOAD
python3 -m pytest --junitxml="general_metrics_test.test_metrics_load_time_explicit_unload.report.xml" $CLIENT_PY::TestGeneralMetrics::test_metrics_load_time_explicit_unload >> $CLIENT_LOG 2>&1
kill_server
set -e

# Test 4 for explicit mode LOAD and UNLOAD with multiple versions
set +e
CLIENT_PY="./general_metrics_test.py"
```
Contributor:

Remove

print(f"Model '{model_name}' loaded successfully.")
else:
except AssertionError:
Contributor:

Do we want the test to pass if it fails to load the model? If not, you should remove the try...except.

Contributor Author:

Yes, that's the expected behaviour. Models should load and unload successfully; otherwise the test should fail, since the subsequent metrics would be incorrect.

Contributor:

If a load or unload failure will make the test fail anyway, why not let it fail at the HTTP response code check instead of the metrics check? That way people can more easily identify the root cause of a job failure.
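
A minimal sketch of that suggestion, assuming Triton's standard model repository HTTP endpoint on localhost:8000 (the helper name is illustrative, not the PR's actual code):

```
import requests

def load_model_or_fail(model_name, host="localhost:8000"):
    # Fail fast on the HTTP response code instead of in a later metrics check.
    response = requests.post(f"http://{host}/v2/repository/models/{model_name}/load")
    assert response.status_code == 200, (
        f"Failed to load '{model_name}': HTTP {response.status_code}: {response.text}"
    )
```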

Collaborator:

How come the core PR was merged way before this one finished? We currently have no ongoing tests for the merged feature on our nightly pipelines in core, right?

Contributor Author:

It was approved in parallel, a couple of days apart. I was unable to get CI passing due to other build issues, and then @yinggeh added more comments after it was approved, hence the delay. Yes, I will get this in ASAP after the trtllm code freeze.

Labels: None yet

4 participants