
Comparing changes

base repository: kubernetes/client-go
base: v0.19.3
head repository: kubernetes/client-go
compare: v0.19.4
  • 10 commits
  • 10 files changed
  • 4 contributors

Commits on Aug 3, 2019

  1. Generate complete leader election record to resolve leader election issues with LeaseLocks

    Kubernetes-commit: acadf81c6832dc55e109bba729da71cd69247958
    zachomedia authored and k8s-publishing-bot committed Aug 3, 2019
    Commit b32de8e

Commits on Apr 22, 2020

  1. Add lease release tests in leader election

    Kubernetes-commit: e94ec96e39a3f91d70fa76b5a060b28c79e8d9a9
    zachomedia authored and k8s-publishing-bot committed Apr 22, 2020
    Commit e65aa52

Commits on Oct 7, 2020

  1. don't cache transports for incomparable configs

    Co-authored-by: Jordan Liggitt <liggitt@google.com>
    
    Kubernetes-commit: f8607b0449ea8dc9d5c7cbc6e829dfec2f8764fc
    roycaihw authored and k8s-publishing-bot committed Oct 7, 2020

    Commit 400bca4
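
    The commit above concerns configurations that cannot be compared for equality, for example configs that carry custom dial or TLS callback functions; caching transports keyed on such configs either collides or grows without bound. A minimal sketch of the general idea follows. It is illustrative only, not the actual client-go transport cache, and the Config, cacheKey, and transportCache names are assumptions made for this example.

    package transportcache

    import (
    	"fmt"
    	"net"
    	"net/http"
    	"sync"
    )

    // Config is a simplified stand-in for a client transport configuration.
    type Config struct {
    	ServerName string
    	Insecure   bool
    	// Dial is a function value; two configs holding different closures
    	// have no stable identity, so they cannot safely share a cache key.
    	Dial func(network, addr string) (net.Conn, error)
    }

    // cacheKey returns a key for cacheable configs and reports whether the
    // config can be cached at all.
    func cacheKey(c *Config) (string, bool) {
    	if c.Dial != nil {
    		return "", false // incomparable: contains a function-valued field
    	}
    	return fmt.Sprintf("server=%s,insecure=%v", c.ServerName, c.Insecure), true
    }

    type transportCache struct {
    	mu         sync.Mutex
    	transports map[string]*http.Transport
    }

    func (tc *transportCache) get(c *Config) *http.Transport {
    	key, cacheable := cacheKey(c)
    	if !cacheable {
    		// Build a one-off transport instead of caching it: a cached entry
    		// would either collide with unrelated configs or never be evicted.
    		return &http.Transport{}
    	}
    	tc.mu.Lock()
    	defer tc.mu.Unlock()
    	if t, ok := tc.transports[key]; ok {
    		return t
    	}
    	if tc.transports == nil {
    		tc.transports = map[string]*http.Transport{}
    	}
    	t := &http.Transport{}
    	tc.transports[key] = t
    	return t
    }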

Commits on Oct 20, 2020

  1. Merge pull request #95618 from roycaihw/automated-cherry-pick-of-#95427-upstream-release-1.19
    
    Automated cherry pick of #95427: don't cache transports for incomparable configs
    
    Kubernetes-commit: f88d259a46257141584312c7b38069a982cd90cc
    k8s-publishing-bot committed Oct 20, 2020

    Commit fa0b9c6

Commits on Oct 28, 2020

  1. Merge pull request #95926 from dprotaso/automated-cherry-pick-of-#80954-upstream-release-1.19
    
    Automated cherry pick of #80954: Generate complete leader election record to resolve
    
    Kubernetes-commit: 5f1a8f61bf7d5e0cbbbd5dab35e4c3f02218c200
    k8s-publishing-bot committed Oct 28, 2020

    Commit 79fff96
  2. Add failing test showing release is not working properly

    Kubernetes-commit: b0bb48abe05d81ce83dcca071057d039832edff6
    dprotaso authored and k8s-publishing-bot committed Oct 28, 2020
    Commit eedba60
  3. Don't clear the cached resourcelock when errors occur on updates

    This allows the lock to be released normally, even with a
    potentially stale lock. This flow should only occur when we're
    the lease holders.
    
    Kubernetes-commit: 1d10ae05c17e86befdab23cf24c3d0493b832881
    dprotaso authored and k8s-publishing-bot committed Oct 28, 2020
    Commit 478748c
  4. Re-add the event recorder in the release test

    Previously, having a mock recorder would cause panics since the lock
    would be set to nil on update failures. Now the recorder uses
    the cached lock.
    
    Kubernetes-commit: 7622eb6a89cb7f7d62a5c7d1d845959fdc8e268b
    dprotaso authored and k8s-publishing-bot committed Oct 28, 2020
    Commit a7c6cd2

Commits on Nov 5, 2020

  1. Merge pull request #95963 from dprotaso/automated-cherry-pick-of-#95939-upstream-release-1.19
    
    Automated cherry pick of #95939: Address scenario where releasing a resource lock fails if a prior update fails or gets cancelled
    
    Kubernetes-commit: c92290add7b0071f06a9ea4d9030b8eb2e67dd7c
    k8s-publishing-bot committed Nov 5, 2020
    Commit 4fcdf7e

Commits on Nov 13, 2020

  1. Commit 2bb8681
4 changes: 2 additions & 2 deletions Godeps/Godeps.json

Generated file; diff not rendered by default.

8 changes: 4 additions & 4 deletions go.mod
@@ -26,14 +26,14 @@ require (
 	golang.org/x/net v0.0.0-20200707034311-ab3426394381
 	golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6
 	golang.org/x/time v0.0.0-20191024005414-555d28b269f0
-	k8s.io/api v0.0.0-20200821172135-21b59c1ded36
-	k8s.io/apimachinery v0.0.0-20200821171749-b63a0c883fbf
+	k8s.io/api v0.19.4
+	k8s.io/apimachinery v0.19.4
 	k8s.io/klog/v2 v2.2.0
 	k8s.io/utils v0.0.0-20200729134348-d5654de09c73
 	sigs.k8s.io/yaml v1.2.0
 )
 
 replace (
-	k8s.io/api => k8s.io/api v0.0.0-20200821172135-21b59c1ded36
-	k8s.io/apimachinery => k8s.io/apimachinery v0.0.0-20200821171749-b63a0c883fbf
+	k8s.io/api => k8s.io/api v0.19.4
+	k8s.io/apimachinery => k8s.io/apimachinery v0.19.4
 )
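
With the v0.19.4 tags published, downstream modules can require the matching k8s.io/api and k8s.io/apimachinery versions directly instead of pseudo-versions. A minimal consumer go.mod sketch, where the module path and Go version are placeholders:

module example.com/my-controller

go 1.15

require (
	k8s.io/api v0.19.4
	k8s.io/apimachinery v0.19.4
	k8s.io/client-go v0.19.4
)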
4 changes: 2 additions & 2 deletions go.sum
@@ -333,8 +333,8 @@ honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWh
 honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
 honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
 honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
-k8s.io/api v0.0.0-20200821172135-21b59c1ded36/go.mod h1:WXzrXjAr+IgCMGkIbOU4i87rvN7UWNGsyNmBEkW9rx8=
-k8s.io/apimachinery v0.0.0-20200821171749-b63a0c883fbf/go.mod h1:DnPGDnARWFvYa3pMHgSxtbZb7gpzzAZ1pTfaUNDVlmA=
+k8s.io/api v0.19.4/go.mod h1:SbtJ2aHCItirzdJ36YslycFNzWADYH3tgOhvBEFtZAk=
+k8s.io/apimachinery v0.19.4/go.mod h1:DnPGDnARWFvYa3pMHgSxtbZb7gpzzAZ1pTfaUNDVlmA=
 k8s.io/gengo v0.0.0-20200413195148-3a45101e95ac/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
 k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
 k8s.io/klog/v2 v2.2.0 h1:XRvcwJozkgZ1UQJmfMGpvRthQHOvihEhYtDfAaxMz/A=
6 changes: 5 additions & 1 deletion tools/leaderelection/leaderelection.go
@@ -290,8 +290,12 @@ func (le *LeaderElector) release() bool {
 	if !le.IsLeader() {
 		return true
 	}
+	now := metav1.Now()
 	leaderElectionRecord := rl.LeaderElectionRecord{
-		LeaderTransitions: le.observedRecord.LeaderTransitions,
+		LeaderTransitions:    le.observedRecord.LeaderTransitions,
+		LeaseDurationSeconds: 1,
+		RenewTime:            now,
+		AcquireTime:          now,
 	}
 	if err := le.config.Lock.Update(context.TODO(), leaderElectionRecord); err != nil {
 		klog.Errorf("Failed to release lock: %v", err)
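
The release() change above writes a complete record (LeaseDurationSeconds, AcquireTime, RenewTime) so a LeaseLock can be handed back cleanly on shutdown instead of being left to expire. That path only runs when ReleaseOnCancel is enabled; a minimal usage sketch against the client-go leader election API follows, where the namespace, lock name, and identity are placeholder values.

package leaderexample

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

// runWithLease runs fn while holding a Lease-based leader lock and releases
// the lock when ctx is cancelled (ReleaseOnCancel).
func runWithLease(ctx context.Context, client kubernetes.Interface, id string, fn func(context.Context)) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Namespace: "default", Name: "example-lock"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		ReleaseOnCancel: true, // invokes release() on shutdown, exercising the change above
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: fn,
			OnStoppedLeading: func() { klog.Infof("%s: leadership lost or released", id) },
		},
	})
}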
281 changes: 281 additions & 0 deletions tools/leaderelection/leaderelection_test.go
@@ -917,3 +917,284 @@ func TestTryAcquireOrRenewEndpointsLeases(t *testing.T) {
func TestTryAcquireOrRenewConfigMapsLeases(t *testing.T) {
testTryAcquireOrRenewMultiLock(t, "configmapsleases")
}

func testReleaseLease(t *testing.T, objectType string) {
tests := []struct {
name string
observedRecord rl.LeaderElectionRecord
observedTime time.Time
reactors []Reactor

expectSuccess bool
transitionLeader bool
outHolder string
}{
{
name: "release acquired lock from no object",
reactors: []Reactor{
{
verb: "get",
objectType: objectType,
reaction: func(action fakeclient.Action) (handled bool, ret runtime.Object, err error) {
return true, nil, errors.NewNotFound(action.(fakeclient.GetAction).GetResource().GroupResource(), action.(fakeclient.GetAction).GetName())
},
},
{
verb: "create",
objectType: objectType,
reaction: func(action fakeclient.Action) (handled bool, ret runtime.Object, err error) {
return true, action.(fakeclient.CreateAction).GetObject(), nil
},
},
{
verb: "update",
objectType: objectType,
reaction: func(action fakeclient.Action) (handled bool, ret runtime.Object, err error) {
return true, action.(fakeclient.UpdateAction).GetObject(), nil
},
},
},
expectSuccess: true,
outHolder: "",
},
}

for i := range tests {
test := &tests[i]
t.Run(test.name, func(t *testing.T) {
// OnNewLeader is called async so we have to wait for it.
var wg sync.WaitGroup
wg.Add(1)
var reportedLeader string
var lock rl.Interface

objectMeta := metav1.ObjectMeta{Namespace: "foo", Name: "bar"}
resourceLockConfig := rl.ResourceLockConfig{
Identity: "baz",
EventRecorder: &record.FakeRecorder{},
}
c := &fake.Clientset{}
for _, reactor := range test.reactors {
c.AddReactor(reactor.verb, objectType, reactor.reaction)
}
c.AddReactor("*", "*", func(action fakeclient.Action) (bool, runtime.Object, error) {
t.Errorf("unreachable action. testclient called too many times: %+v", action)
return true, nil, fmt.Errorf("unreachable action")
})

switch objectType {
case "endpoints":
lock = &rl.EndpointsLock{
EndpointsMeta: objectMeta,
LockConfig: resourceLockConfig,
Client: c.CoreV1(),
}
case "configmaps":
lock = &rl.ConfigMapLock{
ConfigMapMeta: objectMeta,
LockConfig: resourceLockConfig,
Client: c.CoreV1(),
}
case "leases":
lock = &rl.LeaseLock{
LeaseMeta: objectMeta,
LockConfig: resourceLockConfig,
Client: c.CoordinationV1(),
}
}

lec := LeaderElectionConfig{
Lock: lock,
LeaseDuration: 10 * time.Second,
Callbacks: LeaderCallbacks{
OnNewLeader: func(l string) {
defer wg.Done()
reportedLeader = l
},
},
}
observedRawRecord := GetRawRecordOrDie(t, objectType, test.observedRecord)
le := &LeaderElector{
config: lec,
observedRecord: test.observedRecord,
observedRawRecord: observedRawRecord,
observedTime: test.observedTime,
clock: clock.RealClock{},
}
if !le.tryAcquireOrRenew(context.Background()) {
t.Errorf("unexpected result of tryAcquireOrRenew: [succeeded=%v]", true)
}

le.maybeReportTransition()

// Wait for a response to the leader transition, and add 1 so that we can track the final transition.
wg.Wait()
wg.Add(1)

if test.expectSuccess != le.release() {
t.Errorf("unexpected result of release: [succeeded=%v]", !test.expectSuccess)
}

le.observedRecord.AcquireTime = metav1.Time{}
le.observedRecord.RenewTime = metav1.Time{}
if le.observedRecord.HolderIdentity != test.outHolder {
t.Errorf("expected holder:\n\t%+v\ngot:\n\t%+v", test.outHolder, le.observedRecord.HolderIdentity)
}
if len(test.reactors) != len(c.Actions()) {
t.Errorf("wrong number of api interactions")
}
if test.transitionLeader && le.observedRecord.LeaderTransitions != 1 {
t.Errorf("leader should have transitioned but did not")
}
if !test.transitionLeader && le.observedRecord.LeaderTransitions != 0 {
t.Errorf("leader should not have transitioned but did")
}
le.maybeReportTransition()
wg.Wait()
if reportedLeader != test.outHolder {
t.Errorf("reported leader was not the new leader. expected %q, got %q", test.outHolder, reportedLeader)
}
})
}
}

// Will test leader election using endpoints as the resource
func TestReleaseLeaseEndpoints(t *testing.T) {
testReleaseLease(t, "endpoints")
}

// Will test leader election using configmaps as the resource
func TestReleaseLeaseConfigMaps(t *testing.T) {
testReleaseLease(t, "configmaps")
}

// Will test leader election using leases as the resource
func TestReleaseLeaseLeases(t *testing.T) {
testReleaseLease(t, "leases")
}

func TestReleaseOnCancellation_Endpoints(t *testing.T) {
testReleaseOnCancellation(t, "endpoints")
}

func TestReleaseOnCancellation_ConfigMaps(t *testing.T) {
testReleaseOnCancellation(t, "configmaps")
}

func TestReleaseOnCancellation_Leases(t *testing.T) {
testReleaseOnCancellation(t, "leases")
}

func testReleaseOnCancellation(t *testing.T, objectType string) {
var (
onNewLeader = make(chan struct{})
onRenewCalled = make(chan struct{})
onRenewResume = make(chan struct{})
onRelease = make(chan struct{})

lockObj runtime.Object
updates int
)

resourceLockConfig := rl.ResourceLockConfig{
Identity: "baz",
EventRecorder: &record.FakeRecorder{},
}
c := &fake.Clientset{}

c.AddReactor("get", objectType, func(action fakeclient.Action) (handled bool, ret runtime.Object, err error) {
if lockObj != nil {
return true, lockObj, nil
}
return true, nil, errors.NewNotFound(action.(fakeclient.GetAction).GetResource().GroupResource(), action.(fakeclient.GetAction).GetName())
})

// create lock
c.AddReactor("create", objectType, func(action fakeclient.Action) (handled bool, ret runtime.Object, err error) {
lockObj = action.(fakeclient.CreateAction).GetObject()
return true, lockObj, nil
})

c.AddReactor("update", objectType, func(action fakeclient.Action) (handled bool, ret runtime.Object, err error) {
updates++

// Second update (first renew) should return our canceled error
// FakeClient doesn't do anything with the context so we're doing this ourselves
if updates == 2 {
close(onRenewCalled)
<-onRenewResume
return true, nil, context.Canceled
} else if updates == 3 {
close(onRelease)
}

lockObj = action.(fakeclient.UpdateAction).GetObject()
return true, lockObj, nil

})

c.AddReactor("*", "*", func(action fakeclient.Action) (bool, runtime.Object, error) {
t.Errorf("unreachable action. testclient called too many times: %+v", action)
return true, nil, fmt.Errorf("unreachable action")
})

lock, err := rl.New(objectType, "foo", "bar", c.CoreV1(), c.CoordinationV1(), resourceLockConfig)
if err != nil {
t.Fatal("resourcelock.New() = ", err)
}

lec := LeaderElectionConfig{
Lock: lock,
LeaseDuration: 15 * time.Second,
RenewDeadline: 2 * time.Second,
RetryPeriod: 1 * time.Second,

// This is what we're testing
ReleaseOnCancel: true,

Callbacks: LeaderCallbacks{
OnNewLeader: func(identity string) {},
OnStoppedLeading: func() {},
OnStartedLeading: func(context.Context) {
close(onNewLeader)
},
},
}

elector, err := NewLeaderElector(lec)
if err != nil {
t.Fatal("Failed to create leader elector: ", err)
}

ctx, cancel := context.WithCancel(context.Background())

go elector.Run(ctx)

// Wait for us to become the leader
select {
case <-onNewLeader:
case <-time.After(10 * time.Second):
t.Fatal("failed to become the leader")
}

// Wait for renew (update) to be invoked
select {
case <-onRenewCalled:
case <-time.After(10 * time.Second):
t.Fatal("the elector failed to renew the lock")
}

// Cancel the context - stopping the elector while
// it's running
cancel()

// Resume the update call to return the cancellation
// which should trigger the release flow
close(onRenewResume)

select {
case <-onRelease:
case <-time.After(10 * time.Second):
t.Fatal("the lock was not released")
}
}
8 changes: 6 additions & 2 deletions tools/leaderelection/resourcelock/configmaplock.go
@@ -92,8 +92,12 @@ func (cml *ConfigMapLock) Update(ctx context.Context, ler LeaderElectionRecord)
 		cml.cm.Annotations = make(map[string]string)
 	}
 	cml.cm.Annotations[LeaderElectionRecordAnnotationKey] = string(recordBytes)
-	cml.cm, err = cml.Client.ConfigMaps(cml.ConfigMapMeta.Namespace).Update(ctx, cml.cm, metav1.UpdateOptions{})
-	return err
+	cm, err := cml.Client.ConfigMaps(cml.ConfigMapMeta.Namespace).Update(ctx, cml.cm, metav1.UpdateOptions{})
+	if err != nil {
+		return err
+	}
+	cml.cm = cm
+	return nil
 }

// RecordEvent in leader election while adding meta-data
8 changes: 6 additions & 2 deletions tools/leaderelection/resourcelock/endpointslock.go
@@ -87,8 +87,12 @@ func (el *EndpointsLock) Update(ctx context.Context, ler LeaderElectionRecord) e
 		el.e.Annotations = make(map[string]string)
 	}
 	el.e.Annotations[LeaderElectionRecordAnnotationKey] = string(recordBytes)
-	el.e, err = el.Client.Endpoints(el.EndpointsMeta.Namespace).Update(ctx, el.e, metav1.UpdateOptions{})
-	return err
+	e, err := el.Client.Endpoints(el.EndpointsMeta.Namespace).Update(ctx, el.e, metav1.UpdateOptions{})
+	if err != nil {
+		return err
+	}
+	el.e = e
+	return nil
 }

// RecordEvent in leader election while adding meta-data
11 changes: 8 additions & 3 deletions tools/leaderelection/resourcelock/leaselock.go
@@ -71,9 +71,14 @@ func (ll *LeaseLock) Update(ctx context.Context, ler LeaderElectionRecord) error
 		return errors.New("lease not initialized, call get or create first")
 	}
 	ll.lease.Spec = LeaderElectionRecordToLeaseSpec(&ler)
-	var err error
-	ll.lease, err = ll.Client.Leases(ll.LeaseMeta.Namespace).Update(ctx, ll.lease, metav1.UpdateOptions{})
-	return err
+
+	lease, err := ll.Client.Leases(ll.LeaseMeta.Namespace).Update(ctx, ll.lease, metav1.UpdateOptions{})
+	if err != nil {
+		return err
+	}
+
+	ll.lease = lease
+	return nil
 }

// RecordEvent in leader election while adding meta-data