Report RPS for autoscaler metrics #5238
Conversation
@taragu: 0 warnings.
In response to this:
/lint
Part of #5228
Proposed Changes
- Add reporting for stable and panic RPS metrics
Release Note
NONE
/cc @yanweiguo
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
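The proposed change — reporting stable and panic RPS — comes down to recording two new OpenCensus measures from the autoscaler's stats reporter. A minimal, hedged sketch of the recording side (the method name and the r.ctx field are illustrative, not necessarily this PR's exact code):

// ReportStableAndPanicRPS records the averaged requests-per-second values.
// r.ctx is assumed to be an OpenCensus context already tagged with the
// namespace/configuration/revision keys the reporter uses.
func (r *Reporter) ReportStableAndPanicRPS(stableRPS, panicRPS float64) error {
	stats.Record(r.ctx, stableRPSM.M(stableRPS), panicRPSM.M(panicRPS))
	return nil
}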
Hi @taragu. Thanks for your PR. I'm waiting for a knative member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
config/monitoring/metrics/prometheus/100-grafana-dash-knative-scaling.yaml
/hold
Please revert knative/pkg#589.
	panicRPSM = stats.Float64(
		"panic_requests_per_second",
		"Average requests-per-second per observed pod over the panic window",
		stats.UnitDimensionless)
Could we add targetRPS as well?
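For reference, a target measure could be declared the same way as the stable and panic ones. A hedged sketch, with the exported name chosen to line up with the autoscaler_target_requests_per_second query used in the Grafana dashboard later in this PR:

	targetRPSM = stats.Float64(
		"target_requests_per_second",
		"The desired requests-per-second per observed pod",
		stats.UnitDimensionless)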
@@ -53,6 +53,14 @@ var (
		"panic_request_concurrency",
		"Average of requests count per observed pod over the panic window",
		stats.UnitDimensionless)
	stableRPSM = stats.Float64(
You need to register these metrics in func register.
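Since these are OpenCensus measures, registering them means adding views for them. A minimal, hedged sketch of what that could look like (tag keys omitted for brevity, so this is illustrative rather than the exact serving code):

if err := view.Register(
	&view.View{
		Description: "Average requests-per-second per observed pod over the stable window",
		Measure:     stableRPSM,
		Aggregation: view.LastValue(),
	},
	&view.View{
		Description: "Average requests-per-second per observed pod over the panic window",
		Measure:     panicRPSM,
		Aggregation: view.LastValue(),
	},
); err != nil {
	panic(err)
}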
/hold cancel
Force-pushed from e3cf049 to ba70dd1.
pkg/autoscaler/multiscaler.go
@@ -48,6 +48,8 @@ type DeciderSpec struct {
	// The value of scaling metric per pod that we target to maintain.
	// TargetValue <= TotalValue.
	TargetValue float64
	// The value of requests-per-second per pod that we target to maintain.
	TargetRPS float64
I don't think we need this. TargetValue is intended to be used for any metric.
Ahh got it!
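To spell out the point: DeciderSpec keeps a single TargetValue, and the code that builds the spec picks the right default for it based on the scaling metric, so a dedicated TargetRPS field is redundant. A hedged sketch of that selection (the helper, the "rps" comparison, and the concurrency default field name are illustrative; only config.RPSTargetDefault comes from this PR's diff):

// targetValueFor picks the per-pod target for the decider based on the
// metric the PodAutoscaler is configured to scale on.
func targetValueFor(pa *v1alpha1.PodAutoscaler, config *autoscaler.Config) float64 {
	if pa.Metric() == "rps" {
		return config.RPSTargetDefault
	}
	// Fall back to the concurrency-based default (field name assumed).
	return config.ContainerConcurrencyTargetDefault
}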
"expr": "sum(autoscaler_target_requests_per_second{namespace_name=\"$namespace\", configuration_name=\"$configuration\", revision_name=\"$revision\"})", | ||
"format": "time_series", | ||
"interval": "1s", | ||
"intervalFactor": 1, |
I think we need to change the title for this dashboard as well, or create a new one for RPS.
@@ -65,6 +65,7 @@ func MakeDecider(ctx context.Context, pa *v1alpha1.PodAutoscaler, config *autosc
		MaxScaleUpRate: config.MaxScaleUpRate,
		ScalingMetric:  pa.Metric(),
		TargetValue:    target,
		TargetRPS:      config.RPSTargetDefault,
As @yanweiguo mentioned, this shouldn't be needed.
Still here.
Hi @taragu, thanks for this PR. Could you test it end to end and get a screenshot of the Grafana dashboard? I'm not sure what it will look like when we put concurrency and RPS into one table.
Force-pushed from 5d3d452 to 0f4a00c.
@yanweiguo I've updated the PR to put the RPS metrics in a separate graph. Would you please review the PR again? This is what the dashboard looks like:
Force-pushed from 0f4a00c to ad886da.
/ok-to-test
The following jobs failed due to test flakiness:
Failed non-flaky tests preventing automatic retry of pull-knative-serving-integration-tests:
Force-pushed from ad886da to aead62f.
The following is the coverage report on pkg/.
/lgtm
/assign @mdemirhan
/approve
/approve
[APPROVALNOTIFIER] This PR is APPROVED.
This pull-request has been approved by: taragu, vagababov, yanweiguo. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.