
Add IP mode field to loadbalancer status ingress #97681

Closed
wants to merge 1 commit

Conversation

Sh4d1
Member

@Sh4d1 Sh4d1 commented Jan 4, 2021

Signed-off-by: Patrik Cyvoct patrik@ptrk.io

What type of PR is this?
/kind feature

What this PR does / why we need it:
Implements kubernetes/enhancements#1392
Which issue(s) this PR fixes:

Fixes #79783
Fixes #66607
Special notes for your reviewer:

Does this PR introduce a user-facing change?:

Add IP mode field to loadbalancer status ingress

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

- [KEP]: https://github.com/kubernetes/enhancements/pull/1392

/assign @thockin

@k8s-ci-robot k8s-ci-robot added the release-note Denotes a PR that will be considered when it comes time to generate release notes. label Jan 4, 2021
@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jan 4, 2021
@k8s-ci-robot
Contributor

@Sh4d1: This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-priority Indicates a PR lacks a `priority/foo` label and requires one. area/cloudprovider area/ipvs kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API sig/apps Categorizes an issue or PR as relevant to SIG Apps. labels Jan 4, 2021
@k8s-ci-robot k8s-ci-robot added sig/cloud-provider Categorizes an issue or PR as relevant to SIG Cloud Provider. sig/network Categorizes an issue or PR as relevant to SIG Network. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jan 4, 2021
@@ -43,3 +45,15 @@ func SetDefaults_NetworkPolicy(obj *networkingv1.NetworkPolicy) {
}
}
}

func SetDefaults_Ingress(obj *networkingv1.Ingress) {
Member Author

cc @liggitt this should fix #92312 (comment)

Member

I did not realize this struct was reused for Ingress! I don't think this is right. It happens to work for the fields that exist today, but I don't know of any Ingress implementation that is VIP-like.

I think we should probably clone the definition of type LoadBalancerStatus into the networking apigroup.

@bowei @rramkumar1 @aledbf - agree or disagree?

Member

And if we do that, it should be a standalone prefactoring commit. :)

Member

I think we should probably clone the definition of type LoadBalancerStatus into the networking apigroup.

+1

Member Author

Wouldn't that be a breaking change in client-go? 🤔
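
For context, the clone thockin proposes would give the networking apigroup its own status types rather than reusing core v1's. A sketch of what that could look like (Kubernetes later added types along these lines, e.g. IngressLoadBalancerStatus in networking/v1; the exact fields here are illustrative):

    // IngressLoadBalancerStatus mirrors core v1's LoadBalancerStatus so the
    // Ingress copy can evolve independently of the Service copy.
    type IngressLoadBalancerStatus struct {
        // Ingress is a list containing ingress points for the load-balancer.
        // +optional
        Ingress []IngressLoadBalancerIngress `json:"ingress,omitempty" protobuf:"bytes,1,rep,name=ingress"`
    }

    // IngressLoadBalancerIngress represents the status of a load-balancer ingress point.
    type IngressLoadBalancerIngress struct {
        // IP is set for load-balancer ingress points that are IP based.
        // +optional
        IP string `json:"ip,omitempty" protobuf:"bytes,1,opt,name=ip"`
        // Hostname is set for load-balancer ingress points that are DNS based.
        // +optional
        Hostname string `json:"hostname,omitempty" protobuf:"bytes,2,opt,name=hostname"`
    }

This is why the client-go question matters: the Go type of the Ingress status field would change even if the wire format stays compatible.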

@Sh4d1
Member Author

Sh4d1 commented Jan 4, 2021

/test pull-kubernetes-e2e-kind-ipv6

Signed-off-by: Patrik Cyvoct <patrik@ptrk.io>
@Sh4d1
Member Author

Sh4d1 commented Jan 4, 2021

/retest

@fejta-bot

This PR may require API review.

If so, when the changes are ready, complete the pre-review checklist and request an API review.

Status of requested reviews is tracked in the API Review project.

@Sh4d1
Member Author

Sh4d1 commented Jan 4, 2021

/retest

@Sh4d1
Member Author

Sh4d1 commented Jan 4, 2021

/test pull-kubernetes-e2e-ubuntu-gce-network-policies

@Sh4d1
Member Author

Sh4d1 commented Jan 4, 2021

/retest

@thockin thockin added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jan 4, 2021
Member

@thockin thockin left a comment

Thanks for bringing this back.

// delivering traffic with the destination IP and port set to the node's IP and nodePort or to the pod's IP and targetPort.
// This field can only be set when the ip field is also set, and defaults to "VIP" if not specified.
// +optional
IPMode *LoadBalancerIPMode `json:"ipMode,omitempty" protobuf:"bytes,3,opt,name=ipMode"`
Member

I am concerned that we are using tag 3 here, when tag 4 is already used. I can't see why PortStatus wouldn't have used 3, and I must have missed that at review time.

@janosi - you added PortStatus in c970a46 - did you use 4 for a reason?

Member

Interesting. I remember that both PRs were close in time, and this one had to be reverted (#96454). Is it possible that's the reason, i.e. that this PR predated the PortStatus one?

Member Author

So should I bump to 5? Or keep 3? 🤔

Contributor

Ahh, I am sorry :( I am pretty sure I forgot to fall back to 3 when the other PR was reverted :(

Member

It's fine, as long as we understand what happened :)
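
To make the tag discussion concrete, the resulting field layout would look roughly like this (a sketch assembled from the diff above and the existing v1 types; PortStatus took tag 4 in c970a46, leaving a gap at 3 that the new field reuses):

    type LoadBalancerIngress struct {
        // IP is set for load-balancer ingress points that are IP based.
        IP string `json:"ip,omitempty" protobuf:"bytes,1,opt,name=ip"`
        // Hostname is set for load-balancer ingress points that are DNS based.
        Hostname string `json:"hostname,omitempty" protobuf:"bytes,2,opt,name=hostname"`
        // IPMode fills the gap at tag 3 left by the PortStatus revert.
        IPMode *LoadBalancerIPMode `json:"ipMode,omitempty" protobuf:"bytes,3,opt,name=ipMode"`
        // Ports was added with tag 4 in c970a46.
        Ports []PortStatus `json:"ports,omitempty" protobuf:"bytes,4,rep,name=ports"`
    }

    // LoadBalancerIPMode enumerates the two modes referenced throughout the diff.
    type LoadBalancerIPMode string

    const (
        LoadBalancerIPModeVIP   LoadBalancerIPMode = "VIP"   // destination stays the LB IP and port
        LoadBalancerIPModeProxy LoadBalancerIPMode = "Proxy" // destination rewritten to node/pod IP and port
    )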

@@ -1160,8 +1160,8 @@ func (proxier *Proxier) syncProxyRules() {

// Capture load-balancer ingress.
fwChain := svcInfo.serviceFirewallChainName
for _, ingress := range svcInfo.LoadBalancerIPStrings() {
if ingress != "" {
for _, ingress := range svcInfo.LoadBalancerIngress() {
Member

For future - it might have made these reviews a touch easier if there was a single "prefactoring" commit that made this (effectively no-op) structural change, and then the "real" commit would have been that much easier to see.

Member Author

Noted! (I can still split it if needed 😄 )

Member

@Sh4d1 ,

One of my colleagues has a vested interest in moving this forward. Is it possible to go ahead and split things up as @thockin suggested? I imagine that when he is able to conduct a Prod Readiness review he'll need to start from scratch anyway, given the nearly two-year gap between the above comment and now. Since it would make things easier, maybe split them? I can help too if you like.

)
// jump to service firewall chain
writeLine(proxier.natRules, append(args, "-j", string(fwChain))...)
if !utilfeature.DefaultFeatureGate.Enabled(features.LoadBalancerIPMode) || *ingress.IPMode == v1.LoadBalancerIPModeVIP {
Member

How about capturing this as a helper function like:

func isVIP(ing *v1.LoadBalancerIngress) bool {
    if !utilfeature.DefaultFeatureGate.Enabled(features.LoadBalancerIPMode) {
        return true // backwards compat
    }
    // IPMode is a pointer and defaults to "VIP" when unset.
    return ing.IPMode == nil || *ing.IPMode == v1.LoadBalancerIPModeVIP
}

This makes the eventual cleanup easier. We should probably be doing more of this.
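
At the call sites below, the inline feature-gate check would then collapse to something like this (illustrative only, not code from the PR):

    for _, ingress := range svcInfo.LoadBalancerIngress() {
        if ingress.IP != "" && isVIP(&ingress) {
            // install the VIP-style capture rules as before
        }
    }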

for _, ingress := range svcInfo.LoadBalancerIPStrings() {
if ingress != "" {
for _, ingress := range svcInfo.LoadBalancerIngress() {
if ingress.IP != "" && (!utilfeature.DefaultFeatureGate.Enabled(features.LoadBalancerIPMode) || *ingress.IPMode == v1.LoadBalancerIPModeVIP) {
Member

same comment re: helper function


if len(ips) > 0 {
ipFamilyMap = utilproxy.MapIPsByIPFamily(ips)
correctIngresses, incorrectIngresses := utilproxy.FilterIncorrectLoadBalancerIngress(service.Status.LoadBalancer.Ingress, sct.ipFamily)
Member

We switched from a "filter incorrect" model to a "map by family" model because it is easier to think about and more copyable as a pattern. Can we convert this to that model, please?

var invalidIngresses []v1.LoadBalancerIngress

for _, ing := range ingresses {
correctIP := MapIPsByIPFamily([]string{ing.IP})[ipFamily]
Member

This feels like going out of the way to use the existing API, but it doesn't really fit. You're going to waste time mapping by IP when you KNOW there's just one IP. I think it would be cleaner to iterate the input list, map by family, then return that map.
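
A sketch of the shape thockin describes, iterating the input once and returning the map directly (the helper name and details are illustrative; netutils is k8s.io/utils/net):

    // mapIngressByIPFamily groups load-balancer ingress entries by the IP
    // family of their IP field, mirroring the MapIPsByIPFamily pattern.
    func mapIngressByIPFamily(ingresses []v1.LoadBalancerIngress) map[v1.IPFamily][]v1.LoadBalancerIngress {
        byFamily := map[v1.IPFamily][]v1.LoadBalancerIngress{}
        for _, ing := range ingresses {
            family := v1.IPv4Protocol
            if netutils.IsIPv6String(ing.IP) {
                family = v1.IPv6Protocol
            }
            byFamily[family] = append(byFamily[family], ing)
        }
        return byFamily
    }

Callers then read the family they serve from the map; everything in the other bucket is the "incorrect" set, with no separate filter pass.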


@k8s-ci-robot
Contributor

@k8s-triage-robot: Closed this PR.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@thockin thockin reopened this Oct 22, 2021
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: Sh4d1
To complete the pull request process, please ask for approval from thockin after the PR has been reviewed.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot
Contributor

@Sh4d1: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
pull-kubernetes-e2e-gci-gce-ipvs 7d62217 link false /test pull-kubernetes-e2e-gci-gce-ipvs
pull-kubernetes-node-e2e-containerd 7d62217 link true /test pull-kubernetes-node-e2e-containerd
pull-kubernetes-e2e-gce-alpha-features 7d62217 link false /test pull-kubernetes-e2e-gce-alpha-features
pull-kubernetes-e2e-kind 7d62217 link true /test pull-kubernetes-e2e-kind
pull-kubernetes-integration 7d62217 link true /test pull-kubernetes-integration
pull-kubernetes-e2e-gce-ubuntu-containerd 7d62217 link true /test pull-kubernetes-e2e-gce-ubuntu-containerd
pull-kubernetes-e2e-ubuntu-gce-network-policies 7d62217 link false /test pull-kubernetes-e2e-ubuntu-gce-network-policies
pull-kubernetes-e2e-kind-ipv6 7d62217 link true /test pull-kubernetes-e2e-kind-ipv6
pull-kubernetes-conformance-kind-ga-only-parallel 7d62217 link true /test pull-kubernetes-conformance-kind-ga-only-parallel
pull-kubernetes-e2e-gci-gce-ingress 7d62217 link false /test pull-kubernetes-e2e-gci-gce-ingress
pull-kubernetes-dependencies 7d62217 link true /test pull-kubernetes-dependencies
pull-kubernetes-unit 7d62217 link true /test pull-kubernetes-unit
pull-kubernetes-verify-govet-levee 7d62217 link true /test pull-kubernetes-verify-govet-levee
pull-kubernetes-verify 7d62217 link true /test pull-kubernetes-verify
pull-kubernetes-e2e-gce-100-performance 7d62217 link true /test pull-kubernetes-e2e-gce-100-performance
pull-kubernetes-typecheck 7d62217 link true /test pull-kubernetes-typecheck

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@thockin thockin removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Nov 10, 2021
@dims
Member

dims commented Jan 10, 2022

Is this PR still needed? If so, please rebase (or we can close it).

@nschad

nschad commented Jan 10, 2022

Is this PR still needed? If so, please rebase (or we can close it).

I think they are waiting for #106242 to be accepted/merged.

@thockin
Member

thockin commented Jan 10, 2022 via email

@thockin
Member

thockin commented Jan 21, 2022

It seems unlikely that I will get to this as a take-over for the 1.24 cycle.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 21, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 22, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closed this PR.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@@ -1230,7 +1232,7 @@ func (proxier *Proxier) syncProxyRules() {
"-A", string(kubeExternalServicesChain),
@polefishu polefishu Jul 26, 2022

Should we skip writing the reject rule when IPMode == v1.LoadBalancerIPModeProxy?
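
For illustration, the check being suggested might look like this inside the rule-writing loop (a sketch, not code from the PR):

    // Skip the short-circuit/reject rule for Proxy-mode ingress IPs so that
    // traffic addressed to the LB IP still flows through the load balancer.
    if ingress.IPMode != nil && *ingress.IPMode == v1.LoadBalancerIPModeProxy {
        continue
    }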

@nschad

nschad commented Nov 3, 2022

Hey,

Now that #106242 is merged, should we proceed with this PR, or did we move the functionality to a different PR?

@thockin
Member

thockin commented Nov 3, 2022 via email

@akutz
Member

akutz commented Feb 2, 2023

Quoting thockin's reply (via email, above):

If someone wants to pick it up, great! Otherwise it is in my queue for "eventually"

I am taking a look. Tim, I may reach out on Slack. Thanks!

@aojea
Member

aojea commented Mar 22, 2023

@thockin Can we consider re-opening this? My understanding is this still likely needs a new owner (I'm not sure if @Sh4d1 is still planning on moving forward with this).

This is a pull request; the new owner should open a new one 😄

@CharlieR-o-o-t

CharlieR-o-o-t commented Aug 22, 2023

@thockin @Sh4d1 I'd like to pick it up. Can I?

@thockin
Member

thockin commented Aug 22, 2023

This one has been implemented now.

@CharlieR-o-o-t

@thockin sorry, I'm trying to figure out why this PR was closed without being merged.
Was this KEP implemented in another PR?

Also, I don't see any related changes in "master".

@aojea
Member

aojea commented Aug 23, 2023

#119937

Labels
area/cloudprovider area/ipvs cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. release-note Denotes a PR that will be considered when it comes time to generate release notes. sig/apps Categorizes an issue or PR as relevant to SIG Apps. sig/cloud-provider Categorizes an issue or PR as relevant to SIG Cloud Provider. sig/network Categorizes an issue or PR as relevant to SIG Network. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. triage/accepted Indicates an issue or PR is ready to be actively worked on.