Fix error handling when trying to delete remote resources not present in Devfile #6659

Merged
39 changes: 17 additions & 22 deletions pkg/devfile/adapters/kubernetes/component/adapter.go
@@ -6,12 +6,11 @@ import (
"fmt"
"reflect"
"strings"
sync2 "sync"

devfilefs "github.com/devfile/library/v2/pkg/testingutil/filesystem"
"golang.org/x/sync/errgroup"
kerrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/utils/pointer"

"github.com/redhat-developer/odo/pkg/binding"
@@ -693,40 +692,36 @@ func (a Adapter) deleteRemoteResources(objectsToRemove []unstructured.Unstructur
resources = append(resources, fmt.Sprintf("%s/%s", u.GetKind(), u.GetName()))
}

var err error
spinner := log.Spinnerf("Deleting resources not present in the Devfile: %s", strings.Join(resources, ", "))
Contributor:
I assume we're sacrificing the spinner message to be able to use errgroup and make sure we don't run into race conditions, yes?

Member Author:
We could have used errgroup with the spinner, as the spinner is started outside of the group and its status is updated after all goroutines have exited.
The problem is that odo will continuously retry deleting the resources it couldn't delete (and display the spinner over and over again), which adds too much clutter to the terminal, IMO.
Also, I found the final status of the spinner a bit misleading: it displays a success mark (✓) even though it ignored the PodMetrics resources (in the example below) and didn't actually delete them, since it will try to delete them again :-).

Sample output with the spinner
$ odo dev
  __
 /  \__     Developing using the "my-node-app" Devfile
 \__/  \    Namespace: default
 /  \__/    odo version: v3.8.0
 \__/

↪ Running on the cluster in Dev mode
 •  Waiting for Kubernetes resources  ...
 ✓  Creating resource Pod/k8s-deploybydefault-true-and-not-referenced
 ✓  Creating resource Pod/k8s-deploybydefault-false-and-not-referenced
 ✓  Creating resource Pod/ocp-deploybydefault-true-and-not-referenced
 ✓  Creating resource Pod/k8s-deploybydefault-not-set-and-not-referenced
 ✓  Creating resource Pod/ocp-deploybydefault-false-and-not-referenced
 ✓  Creating resource Pod/ocp-deploybydefault-not-set-and-not-referenced
 ⚠  Pod is Pending
 ✓  Pod is Running
 ✓  Deleting resources not present in the Devfile: PodMetrics/k8s-deploybydefault-false-and-not-referenced, PodMetrics/k8s-deploybydefault-not-set-and-not-referenced, PodMetrics/k8s-deploybydefault-true-and-not-referenced, PodMetrics/ocp-deploybydefault-false-and-not-referenced, PodMetrics/ocp-deploybydefault-not-set-and-not-referenced, PodMetrics/ocp-deploybydefault-true-and-not-referenced [186ms]
 ✓  Syncing files into the container [101ms]
 ✓  Building your application in container (command: build) [3s]
 •  Executing the application (command: start-app)  ...
 -  Forwarding from 127.0.0.1:20001 -> 3000


↪ Dev mode
 Status:
 Watching for changes in the current directory /home/asoro/work/tmp/5694-implement-autobuild-and-deploybydefault

 Keyboard Commands:
[Ctrl+c] - Exit and delete resources from the cluster 
     [p] - Manually apply local changes to the application on the cluster
 ✓  Deleting resources not present in the Devfile: PodMetrics/k8s-deploybydefault-false-and-not-referenced, PodMetrics/k8s-deploybydefault-not-set-and-not-referenced, PodMetrics/k8s-deploybydefault-true-and-not-referenced, PodMetrics/ocp-deploybydefault-false-and-not-referenced, PodMetrics/ocp-deploybydefault-not-set-and-not-referenced, PodMetrics/ocp-deploybydefault-true-and-not-referenced [215ms]
 ✓  Deleting resources not present in the Devfile: PodMetrics/k8s-deploybydefault-false-and-not-referenced, PodMetrics/k8s-deploybydefault-not-set-and-not-referenced, PodMetrics/k8s-deploybydefault-true-and-not-referenced, PodMetrics/ocp-deploybydefault-false-and-not-referenced, PodMetrics/ocp-deploybydefault-not-set-and-not-referenced, PodMetrics/ocp-deploybydefault-true-and-not-referenced [839ms]
 ✓  Deleting resources not present in the Devfile: PodMetrics/k8s-deploybydefault-false-and-not-referenced, PodMetrics/k8s-deploybydefault-not-set-and-not-referenced, PodMetrics/k8s-deploybydefault-true-and-not-referenced, PodMetrics/ocp-deploybydefault-false-and-not-referenced, PodMetrics/ocp-deploybydefault-not-set-and-not-referenced, PodMetrics/ocp-deploybydefault-true-and-not-referenced [860ms]
 ✓  Deleting resources not present in the Devfile: PodMetrics/k8s-deploybydefault-false-and-not-referenced, PodMetrics/k8s-deploybydefault-not-set-and-not-referenced, PodMetrics/k8s-deploybydefault-true-and-not-referenced, PodMetrics/ocp-deploybydefault-false-and-not-referenced, PodMetrics/ocp-deploybydefault-not-set-and-not-referenced, PodMetrics/ocp-deploybydefault-true-and-not-referenced [853ms]
 ✓  Deleting resources not present in the Devfile: PodMetrics/k8s-deploybydefault-false-and-not-referenced, PodMetrics/k8s-deploybydefault-not-set-and-not-referenced, PodMetrics/k8s-deploybydefault-true-and-not-referenced, PodMetrics/ocp-deploybydefault-false-and-not-referenced, PodMetrics/ocp-deploybydefault-not-set-and-not-referenced, PodMetrics/ocp-deploybydefault-true-and-not-referenced [849ms]
 ✓  Deleting resources not present in the Devfile: PodMetrics/k8s-deploybydefault-false-and-not-referenced, PodMetrics/k8s-deploybydefault-not-set-and-not-referenced, PodMetrics/k8s-deploybydefault-true-and-not-referenced, PodMetrics/ocp-deploybydefault-false-and-not-referenced, PodMetrics/ocp-deploybydefault-not-set-and-not-referenced, PodMetrics/ocp-deploybydefault-true-and-not-referenced [855ms]

...

This is why I thought it would make sense to klog this instead.
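
For illustration only, here is a minimal standalone sketch of the approach described above: errgroup runs the deletions concurrently and surfaces the first real error, while tolerated errors are only logged. The `deleteOne` helper and `errIgnorable` sentinel are hypothetical stand-ins for `kubeClient.DeleteDynamicResource` and the `kerrors.IsNotFound`/`kerrors.IsMethodNotSupported` checks; they are not part of this PR.

```go
package main

import (
	"errors"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// errIgnorable stands in for the kerrors.IsNotFound / kerrors.IsMethodNotSupported cases.
var errIgnorable = errors.New("resource not found or method not supported")

// deleteOne is a hypothetical stand-in for kubeClient.DeleteDynamicResource.
func deleteOne(name string) error {
	if name == "PodMetrics/foo" {
		return errIgnorable
	}
	return nil
}

// deleteAll deletes concurrently and returns the first non-tolerated error, if any.
func deleteAll(names []string) error {
	g := new(errgroup.Group)
	for _, name := range names {
		name := name // avoid reusing the loop variable in each closure
		g.Go(func() error {
			if err := deleteOne(name); err != nil {
				if errors.Is(err, errIgnorable) {
					// odo logs this with klog.V(3).Infof instead of a spinner,
					// so repeated attempts do not clutter the terminal.
					fmt.Printf("ignoring error deleting %s: %v\n", name, err)
					return nil
				}
				return fmt.Errorf("unable to delete resource %s: %w", name, err)
			}
			return nil
		})
	}
	return g.Wait() // first non-nil error from any goroutine, or nil
}

func main() {
	if err := deleteAll([]string{"PodMetrics/foo", "Deployment/bar"}); err != nil {
		fmt.Println("error:", err)
	}
}
```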

defer spinner.End(err == nil)

var wg sync2.WaitGroup
// Delete the resources present on the cluster but not in the Devfile
klog.V(3).Infof("Deleting %d resource(s) not present in the Devfile: %s", len(resources), strings.Join(resources, ", "))
g := new(errgroup.Group)
for _, objectToRemove := range objectsToRemove {
wg.Add(1)
// Avoid re-use of the same `objectToRemove` value in each goroutine closure.
// See https://golang.org/doc/faq#closures_and_goroutines for more details.
objectToRemove := objectToRemove
go func() {
defer wg.Done()
var gvr schema.GroupVersionResource
gvr, err = a.kubeClient.GetGVRFromGVK(objectToRemove.GroupVersionKind())
g.Go(func() error {
gvr, err := a.kubeClient.GetGVRFromGVK(objectToRemove.GroupVersionKind())
if err != nil {
err = fmt.Errorf("unable to get information about resource: %s/%s: %s", objectToRemove.GetKind(), objectToRemove.GetName(), err.Error())
return
return fmt.Errorf("unable to get information about resource: %s/%s: %w", objectToRemove.GetKind(), objectToRemove.GetName(), err)
}

err = a.kubeClient.DeleteDynamicResource(objectToRemove.GetName(), gvr, true)
if err != nil {
if !kerrors.IsNotFound(err) || !kerrors.IsMethodNotSupported(err) {
err = fmt.Errorf("unable to delete resource: %s/%s: %s", objectToRemove.GetKind(), objectToRemove.GetName(), err.Error())
return
if !(kerrors.IsNotFound(err) || kerrors.IsMethodNotSupported(err)) {
return fmt.Errorf("unable to delete resource: %s/%s: %w", objectToRemove.GetKind(), objectToRemove.GetName(), err)
}
klog.V(1).Infof("Failed to delete resource: %s/%s; resource not found", objectToRemove.GetKind(), objectToRemove.GetName())
err = nil
klog.V(3).Infof("Failed to delete resource: %s/%s; resource not found or method not supported", objectToRemove.GetKind(), objectToRemove.GetName())
}
}()

return nil
})
}

wg.Wait()
return err
if err := g.Wait(); err != nil {
return err
}

return nil
}
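
The heart of the error-handling fix is the tolerance condition: the old `!kerrors.IsNotFound(err) || !kerrors.IsMethodNotSupported(err)` evaluates to true for every error (no error satisfies both predicates at once), so NotFound and MethodNotSupported errors were still reported as failures, while the new `!(kerrors.IsNotFound(err) || kerrors.IsMethodNotSupported(err))` is true only when the error is neither. A minimal standalone sketch (not odo code) demonstrating the difference:

```go
package main

import (
	"fmt"

	kerrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// A NotFound error, the kind deleteRemoteResources should tolerate.
	err := kerrors.NewNotFound(schema.GroupResource{Resource: "podmetrics"}, "foo")

	// Old condition: true for any error, so NotFound was still treated as fatal.
	old := !kerrors.IsNotFound(err) || !kerrors.IsMethodNotSupported(err)

	// Fixed condition: true only when the error is neither NotFound nor MethodNotSupported.
	fixed := !(kerrors.IsNotFound(err) || kerrors.IsMethodNotSupported(err))

	fmt.Println(old, fixed) // prints: true false
}
```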

// deleteServiceBindingSecrets takes a list of Service Binding secrets(unstructured) that should be deleted;
191 changes: 191 additions & 0 deletions pkg/devfile/adapters/kubernetes/component/adapter_test.go
@@ -1,12 +1,16 @@
package component

import (
"errors"
"testing"

"github.com/devfile/library/v2/pkg/devfile/generator"
"github.com/devfile/library/v2/pkg/devfile/parser/data"
"github.com/golang/mock/gomock"
"github.com/google/go-cmp/cmp"
kerrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"

"github.com/redhat-developer/odo/pkg/libdevfile"
"github.com/redhat-developer/odo/pkg/preference"
@@ -244,3 +248,190 @@ func TestAdapter_generateDeploymentObjectMeta(t *testing.T) {
})
}
}

func TestAdapter_deleteRemoteResources(t *testing.T) {
type fields struct {
kubeClientCustomizer func(kubeClient *kclient.MockClientInterface)
}
type args struct {
objectsToRemove []unstructured.Unstructured
}

var u1 unstructured.Unstructured
u1.SetGroupVersionKind(schema.GroupVersionKind{
Group: "metrics.k8s.io",
Version: "v1beta1",
Kind: "PodMetrics",
})
u1.SetName("my-pod-metrics")

var u2 unstructured.Unstructured
u2.SetGroupVersionKind(schema.GroupVersionKind{
Group: "postgresql.k8s.enterprisedb.io",
Version: "v1",
Kind: "Cluster",
})
u2.SetName("my-pg-cluster")
toRemove := []unstructured.Unstructured{u1, u2}

tests := []struct {
name string
fields fields
args args
wantErr bool
}{
{
name: "nothing to delete - nil list",
args: args{
objectsToRemove: nil,
},
fields: fields{
kubeClientCustomizer: func(kubeClient *kclient.MockClientInterface) {
kubeClient.EXPECT().DeleteDynamicResource(gomock.Any(), gomock.Any(), gomock.Any()).Times(0)
},
},
wantErr: false,
},
{
name: "nothing to delete - empty list",
args: args{
objectsToRemove: []unstructured.Unstructured{},
},
fields: fields{
kubeClientCustomizer: func(kubeClient *kclient.MockClientInterface) {
kubeClient.EXPECT().DeleteDynamicResource(gomock.Any(), gomock.Any(), gomock.Any()).Times(0)
},
},
wantErr: false,
},
{
name: "error getting information about resource",
args: args{
objectsToRemove: toRemove,
},
fields: fields{
kubeClientCustomizer: func(kubeClient *kclient.MockClientInterface) {
kubeClient.EXPECT().GetGVRFromGVK(gomock.Eq(u1.GroupVersionKind())).Return(schema.GroupVersionResource{}, nil)
kubeClient.EXPECT().GetGVRFromGVK(gomock.Eq(u2.GroupVersionKind())).Return(schema.GroupVersionResource{}, errors.New("error on GetGVRFromGVK(u2)"))
// Only u1 should be deleted
kubeClient.EXPECT().DeleteDynamicResource(gomock.Eq(u1.GetName()), gomock.Any(), gomock.Any()).Return(nil).Times(1)
kubeClient.EXPECT().DeleteDynamicResource(gomock.Eq(u2.GetName()), gomock.Any(), gomock.Any()).Times(0)
},
},
wantErr: true,
},
{
name: "generic error deleting resource",
args: args{
objectsToRemove: toRemove,
},
fields: fields{
kubeClientCustomizer: func(kubeClient *kclient.MockClientInterface) {
kubeClient.EXPECT().GetGVRFromGVK(gomock.Eq(u1.GroupVersionKind())).Return(schema.GroupVersionResource{}, nil)
kubeClient.EXPECT().GetGVRFromGVK(gomock.Eq(u2.GroupVersionKind())).Return(schema.GroupVersionResource{}, nil)
kubeClient.EXPECT().DeleteDynamicResource(gomock.Eq(u1.GetName()), gomock.Any(), gomock.Any()).Return(nil)
kubeClient.EXPECT().DeleteDynamicResource(gomock.Eq(u2.GetName()), gomock.Any(), gomock.Any()).Return(errors.New("generic error while deleting u2"))
},
},
wantErr: true,
},
{
name: "generic error deleting all resources",
args: args{
objectsToRemove: toRemove,
},
fields: fields{
kubeClientCustomizer: func(kubeClient *kclient.MockClientInterface) {
kubeClient.EXPECT().GetGVRFromGVK(gomock.Eq(u1.GroupVersionKind())).Return(schema.GroupVersionResource{}, nil)
kubeClient.EXPECT().GetGVRFromGVK(gomock.Eq(u2.GroupVersionKind())).Return(schema.GroupVersionResource{}, nil)
kubeClient.EXPECT().DeleteDynamicResource(gomock.Eq(u1.GetName()), gomock.Any(), gomock.Any()).
Return(errors.New("generic error while deleting u1"))
kubeClient.EXPECT().DeleteDynamicResource(gomock.Eq(u2.GetName()), gomock.Any(), gomock.Any()).
Return(errors.New("generic error while deleting u2"))
},
},
wantErr: true,
},
{
name: "not found error deleting resource",
args: args{
objectsToRemove: toRemove,
},
fields: fields{
kubeClientCustomizer: func(kubeClient *kclient.MockClientInterface) {
kubeClient.EXPECT().GetGVRFromGVK(gomock.Eq(u1.GroupVersionKind())).Return(schema.GroupVersionResource{}, nil)
kubeClient.EXPECT().GetGVRFromGVK(gomock.Eq(u2.GroupVersionKind())).Return(schema.GroupVersionResource{}, nil)
kubeClient.EXPECT().DeleteDynamicResource(gomock.Eq(u1.GetName()), gomock.Any(), gomock.Any()).Return(nil)
kubeClient.EXPECT().DeleteDynamicResource(gomock.Eq(u2.GetName()), gomock.Any(), gomock.Any()).
Return(kerrors.NewNotFound(schema.GroupResource{}, "u2"))
},
},
wantErr: false,
},
{
name: "method not allowed error deleting resource",
args: args{
objectsToRemove: toRemove,
},
fields: fields{
kubeClientCustomizer: func(kubeClient *kclient.MockClientInterface) {
kubeClient.EXPECT().GetGVRFromGVK(gomock.Eq(u1.GroupVersionKind())).Return(schema.GroupVersionResource{}, nil)
kubeClient.EXPECT().GetGVRFromGVK(gomock.Eq(u2.GroupVersionKind())).Return(schema.GroupVersionResource{}, nil)
kubeClient.EXPECT().DeleteDynamicResource(gomock.Eq(u1.GetName()), gomock.Any(), gomock.Any()).Return(nil)
kubeClient.EXPECT().DeleteDynamicResource(gomock.Eq(u2.GetName()), gomock.Any(), gomock.Any()).
Return(kerrors.NewMethodNotSupported(schema.GroupResource{Resource: "PodMetrics"}, "DELETE"))
},
},
wantErr: false,
},
{
name: "not found error deleting all resources",
args: args{
objectsToRemove: toRemove,
},
fields: fields{
kubeClientCustomizer: func(kubeClient *kclient.MockClientInterface) {
kubeClient.EXPECT().GetGVRFromGVK(gomock.Eq(u1.GroupVersionKind())).Return(schema.GroupVersionResource{}, nil)
kubeClient.EXPECT().GetGVRFromGVK(gomock.Eq(u2.GroupVersionKind())).Return(schema.GroupVersionResource{}, nil)
kubeClient.EXPECT().DeleteDynamicResource(gomock.Eq(u1.GetName()), gomock.Any(), gomock.Any()).
Return(kerrors.NewNotFound(schema.GroupResource{}, "u1"))
kubeClient.EXPECT().DeleteDynamicResource(gomock.Eq(u2.GetName()), gomock.Any(), gomock.Any()).
Return(kerrors.NewNotFound(schema.GroupResource{}, "u2"))
},
},
wantErr: false,
},
{
name: "method not allowed error deleting all resources",
args: args{
objectsToRemove: toRemove,
},
fields: fields{
kubeClientCustomizer: func(kubeClient *kclient.MockClientInterface) {
kubeClient.EXPECT().GetGVRFromGVK(gomock.Eq(u1.GroupVersionKind())).Return(schema.GroupVersionResource{}, nil)
kubeClient.EXPECT().GetGVRFromGVK(gomock.Eq(u2.GroupVersionKind())).Return(schema.GroupVersionResource{}, nil)
kubeClient.EXPECT().DeleteDynamicResource(gomock.Eq(u1.GetName()), gomock.Any(), gomock.Any()).
Return(kerrors.NewMethodNotSupported(schema.GroupResource{}, "DELETE"))
kubeClient.EXPECT().DeleteDynamicResource(gomock.Eq(u2.GetName()), gomock.Any(), gomock.Any()).
Return(kerrors.NewMethodNotSupported(schema.GroupResource{}, "DELETE"))
},
},
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ctrl := gomock.NewController(t)
kubeClient := kclient.NewMockClientInterface(ctrl)
if tt.fields.kubeClientCustomizer != nil {
tt.fields.kubeClientCustomizer(kubeClient)
}
a := Adapter{
kubeClient: kubeClient,
}
if err := a.deleteRemoteResources(tt.args.objectsToRemove); (err != nil) != tt.wantErr {
t.Errorf("deleteRemoteResources() error = %v, wantErr %v", err, tt.wantErr)
}
})
}
}