Commit

Merge branch 'main' into issue-3791

840 authored Aug 16, 2024
2 parents 1052dc9 + e6e5d55 commit 88cdaf0
Showing 43 changed files with 2,682 additions and 2,312 deletions.
66 changes: 66 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,71 @@
# Version changelog

## [Release] Release v1.50.0

### New Features and Improvements

* Added `databricks_notification_destination` resource ([#3820](https://github.com/databricks/terraform-provider-databricks/pull/3820)).
* Added support for `cloudflare_api_token` in `databricks_storage_credential` resource ([#3835](https://github.com/databricks/terraform-provider-databricks/pull/3835)).
* Added `active` attribute to `databricks_user` data source ([#3733](https://github.com/databricks/terraform-provider-databricks/pull/3733)).
* Added `workspace_path` attribute to `databricks_notebook` resource and data source ([#3885](https://github.com/databricks/terraform-provider-databricks/pull/3885)).
* Marked attributes as sensitive in `databricks_mlflow_webhook` ([#3825](https://github.com/databricks/terraform-provider-databricks/pull/3825)).


### Bug Fixes

* Automatically assign `IS_OWNER` permission to SQL warehouses if not specified ([#3829](https://github.com/databricks/terraform-provider-databricks/pull/3829)).
* Corrected KMS ARN format in `data_aws_unity_catalog_policy` ([#3823](https://github.com/databricks/terraform-provider-databricks/pull/3823)).
* Fixed crash when destroying `databricks_compliance_security_profile_workspace_setting` ([#3883](https://github.com/databricks/terraform-provider-databricks/pull/3883)).
* Fixed read method of `databricks_entitlements` resource ([#3858](https://github.com/databricks/terraform-provider-databricks/pull/3858)).
* Retry cluster update on "INVALID_STATE" ([#3890](https://github.com/databricks/terraform-provider-databricks/pull/3890)).
* Save Pipeline resource to state in addition to spec ([#3869](https://github.com/databricks/terraform-provider-databricks/pull/3869)).
* Tolerate `databricks_workspace_conf` deletion failures ([#3737](https://github.com/databricks/terraform-provider-databricks/pull/3737)).
* Update Go SDK ([#3826](https://github.com/databricks/terraform-provider-databricks/pull/3826)).
* Cluster key update for `databricks_sql_table` no longer forces a new resource ([#3824](https://github.com/databricks/terraform-provider-databricks/pull/3824)).
* Fixed reading of `databricks_metastore_assignment` when importing the resource ([#3827](https://github.com/databricks/terraform-provider-databricks/pull/3827)).


### Documentation

* Added troubleshooting instructions for the `databricks OAuth is not supported for this host` error ([#3815](https://github.com/databricks/terraform-provider-databricks/pull/3815)).
* Clarified setting of permissions for workspace objects ([#3884](https://github.com/databricks/terraform-provider-databricks/pull/3884)).
* Documented missing task attributes in `databricks_job` resource ([#3817](https://github.com/databricks/terraform-provider-databricks/pull/3817)).
* Fixed documentation for `databricks_schemas` data source and `databricks_metastore_assignment` resource ([#3851](https://github.com/databricks/terraform-provider-databricks/pull/3851)).
* Clarified the `spot_bid_max_price` option for `databricks_cluster` ([#3830](https://github.com/databricks/terraform-provider-databricks/pull/3830)).
* Marked `databricks_sql_dashboard` as legacy ([#3836](https://github.com/databricks/terraform-provider-databricks/pull/3836)).


### Internal Changes

* Refactor exporter: split huge files into smaller ones ([#3870](https://github.com/databricks/terraform-provider-databricks/pull/3870)).
* Refactored `client.ClientForHost` to use Go SDK method ([#3735](https://github.com/databricks/terraform-provider-databricks/pull/3735)).
* Revert "Rewriting DLT pipelines using SDK" ([#3838](https://github.com/databricks/terraform-provider-databricks/pull/3838)).
* Rewrite DLT pipelines using SDK ([#3839](https://github.com/databricks/terraform-provider-databricks/pull/3839)).
* Rewriting DLT pipelines using SDK ([#3792](https://github.com/databricks/terraform-provider-databricks/pull/3792)).
* Update Go SDK ([#3808](https://github.com/databricks/terraform-provider-databricks/pull/3808)).
* Refactored `databricks_mws_permission_assignment` to use the Go SDK ([#3831](https://github.com/databricks/terraform-provider-databricks/pull/3831)).


### Dependency Updates

* Bump databricks-sdk-go to 0.44.0 ([#3896](https://github.com/databricks/terraform-provider-databricks/pull/3896)).
* Bump github.com/zclconf/go-cty from 1.14.4 to 1.15.0 ([#3775](https://github.com/databricks/terraform-provider-databricks/pull/3775)).


### Exporter

* Add retry on "Operation timed out" error ([#3897](https://github.com/databricks/terraform-provider-databricks/pull/3897)).
* Add support for Vector Search assets ([#3828](https://github.com/databricks/terraform-provider-databricks/pull/3828)).
* Add support for `databricks_notification_destination` ([#3861](https://github.com/databricks/terraform-provider-databricks/pull/3861)).
* Add support for `databricks_online_table` ([#3816](https://github.com/databricks/terraform-provider-databricks/pull/3816)).
* Don't export model serving endpoints with foundational models ([#3845](https://github.com/databricks/terraform-provider-databricks/pull/3845)).
* Fix generation of `autotermination_minutes = 0` ([#3881](https://github.com/databricks/terraform-provider-databricks/pull/3881)).
* Generate `databricks_workspace_binding` instead of legacy `databricks_catalog_workspace_binding` ([#3812](https://github.com/databricks/terraform-provider-databricks/pull/3812)).
* Ignore DLT pipelines deployed via DABs ([#3857](https://github.com/databricks/terraform-provider-databricks/pull/3857)).
* Improve exporting of `databricks_model_serving` ([#3821](https://github.com/databricks/terraform-provider-databricks/pull/3821)).
* Refactoring: remove legacy code ([#3864](https://github.com/databricks/terraform-provider-databricks/pull/3864)).


## 1.49.1

### Bug Fixes
25 changes: 0 additions & 25 deletions access/resource_ip_access_list_test.go
@@ -3,7 +3,6 @@ package access
// REST API: https://docs.databricks.com/dev-tools/api/latest/ip-access-list.html#operation/create-list

import (
"context"
"fmt"
"net/http"
"testing"
@@ -13,7 +12,6 @@ import (
"github.com/databricks/terraform-provider-databricks/qa"

"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)

var (
@@ -311,26 +309,3 @@ func TestIPACLDelete_Error(t *testing.T) {
ID: TestingId,
}.ExpectError(t, "Something went wrong")
}

func TestListIpAccessLists(t *testing.T) {
client, server, err := qa.HttpFixtureClient(t, []qa.HTTPFixture{
{
Method: "GET",
Resource: "/api/2.0/ip-access-lists",
Response: map[string]any{},
},
})
require.NoError(t, err)

w, err := client.WorkspaceClient()
require.NoError(t, err)

defer server.Close()
require.NoError(t, err)

ctx := context.Background()
ipLists, err := w.IpAccessLists.Impl().List(ctx)

require.NoError(t, err)
assert.Equal(t, 0, len(ipLists.IpAccessLists))
}
25 changes: 0 additions & 25 deletions catalog/resource_volume_test.go
@@ -1,7 +1,6 @@
package catalog

import (
"context"
"fmt"
"net/http"
"testing"
@@ -10,7 +9,6 @@ import (
"github.com/databricks/terraform-provider-databricks/common"
"github.com/databricks/terraform-provider-databricks/qa"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)

func TestVolumesCornerCases(t *testing.T) {
@@ -740,26 +738,3 @@ func TestVolumeDelete_Error(t *testing.T) {
ID: "testCatalogName.testSchemaName.testName",
}.ExpectError(t, "Something went wrong")
}

func TestVolumesList(t *testing.T) {
client, server, err := qa.HttpFixtureClient(t, []qa.HTTPFixture{
{
Method: http.MethodGet,
Resource: "/api/2.1/unity-catalog/volumes?catalog_name=&schema_name=",
Response: map[string]any{},
},
})
require.NoError(t, err)

w, err := client.WorkspaceClient()
require.NoError(t, err)

defer server.Close()
require.NoError(t, err)

ctx := context.Background()
vLists, err := w.Volumes.Impl().List(ctx, catalog.ListVolumesRequest{})

require.NoError(t, err)
assert.Equal(t, 0, len(vLists.Volumes))
}
6 changes: 1 addition & 5 deletions clusters/data_spark_version.go
@@ -21,11 +21,7 @@ func DataSourceSparkVersion() common.Resource {
return nil
}, func(s map[string]*schema.Schema) map[string]*schema.Schema {
common.CustomizeSchemaPath(s, "photon").SetDeprecated("Specify runtime_engine=\"PHOTON\" in the cluster configuration")
common.CustomizeSchemaPath(s).AddNewField("graviton", &schema.Schema{
Type: schema.TypeBool,
Optional: true,
Deprecated: "Not required anymore - it's automatically enabled on the Graviton-based node types",
})
common.CustomizeSchemaPath(s, "graviton").SetDeprecated("Not required anymore - it's automatically enabled on the Graviton-based node types")
return s
})
}
19 changes: 18 additions & 1 deletion clusters/resource_cluster.go
@@ -2,15 +2,18 @@ package clusters

import (
"context"
"errors"
"fmt"
"log"
"strings"
"time"

"github.com/hashicorp/go-cty/cty"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"

"github.com/databricks/databricks-sdk-go/apierr"
"github.com/databricks/databricks-sdk-go/service/compute"
"github.com/databricks/terraform-provider-databricks/common"
"github.com/databricks/terraform-provider-databricks/libraries"
@@ -604,7 +607,21 @@ func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, c *commo
return err
}
cluster.ForceSendFields = []string{"NumWorkers"}
_, err = clusters.Edit(ctx, cluster)

err = retry.RetryContext(ctx, 15*time.Minute, func() *retry.RetryError {
_, err = clusters.Edit(ctx, cluster)
if err == nil {
return nil
}
var apiErr *apierr.APIError
// Only Running and Terminated clusters can be modified. In particular, autoscaling clusters cannot be modified
// while the resizing is ongoing. We retry in this case. Scaling can take several minutes.
if errors.As(err, &apiErr) && apiErr.ErrorCode == "INVALID_STATE" {
return retry.RetryableError(fmt.Errorf("cluster %s cannot be modified in its current state", clusterId))
}
return retry.NonRetryableError(err)
})

}
if err != nil {
return err
96 changes: 96 additions & 0 deletions clusters/resource_cluster_test.go
@@ -965,6 +965,102 @@ func TestResourceClusterUpdate(t *testing.T) {
assert.Equal(t, "abc", d.Id(), "Id should be the same as in reading")
}

func TestResourceClusterUpdate_WhileScaling(t *testing.T) {
d, err := qa.ResourceFixture{
Fixtures: []qa.HTTPFixture{
{
Method: "GET",
Resource: "/api/2.1/clusters/get?cluster_id=abc",
ReuseRequest: true,
Response: compute.ClusterDetails{
ClusterId: "abc",
NumWorkers: 100,
ClusterName: "Shared Autoscaling",
SparkVersion: "7.1-scala12",
NodeTypeId: "i3.xlarge",
AutoterminationMinutes: 15,
State: compute.StateRunning,
},
},
{
Method: "POST",
Resource: "/api/2.1/clusters/events",
ExpectedRequest: compute.GetEvents{
ClusterId: "abc",
Limit: 1,
Order: compute.GetEventsOrderDesc,
EventTypes: []compute.EventType{compute.EventTypePinned, compute.EventTypeUnpinned},
},
Response: compute.GetEventsResponse{
Events: []compute.ClusterEvent{},
TotalCount: 0,
},
},
{
Method: "POST",
Resource: "/api/2.1/clusters/start",
ExpectedRequest: compute.StartCluster{
ClusterId: "abc",
},
},
{
Method: "GET",
Resource: "/api/2.0/libraries/cluster-status?cluster_id=abc",
Response: compute.ClusterLibraryStatuses{
LibraryStatuses: []compute.LibraryFullStatus{},
},
},
{
Method: "POST",
Resource: "/api/2.1/clusters/edit",
ExpectedRequest: compute.ClusterDetails{
AutoterminationMinutes: 15,
ClusterId: "abc",
NumWorkers: 100,
ClusterName: "Shared Autoscaling",
SparkVersion: "7.1-scala12",
NodeTypeId: "i3.xlarge",
},
Response: common.APIErrorBody{
ErrorCode: "INVALID_STATE",
},
Status: 404,
},
{
Method: "POST",
Resource: "/api/2.1/clusters/edit",
ExpectedRequest: compute.ClusterDetails{
AutoterminationMinutes: 15,
ClusterId: "abc",
NumWorkers: 100,
ClusterName: "Shared Autoscaling",
SparkVersion: "7.1-scala12",
NodeTypeId: "i3.xlarge",
},
},
{
Method: "GET",
Resource: "/api/2.0/libraries/cluster-status?cluster_id=abc",
Response: compute.ClusterLibraryStatuses{
LibraryStatuses: []compute.LibraryFullStatus{},
},
},
},
ID: "abc",
Update: true,
Resource: ResourceCluster(),
State: map[string]any{
"autotermination_minutes": 15,
"cluster_name": "Shared Autoscaling",
"spark_version": "7.1-scala12",
"node_type_id": "i3.xlarge",
"num_workers": 100,
},
}.Apply(t)
assert.NoError(t, err)
assert.Equal(t, "abc", d.Id(), "Id should be the same as in reading")
}

func TestResourceClusterUpdateWithPinned(t *testing.T) {
d, err := qa.ResourceFixture{
Fixtures: []qa.HTTPFixture{
11 changes: 7 additions & 4 deletions common/client.go
@@ -22,6 +22,12 @@ type cachedMe struct {
mu sync.Mutex
}

func newCachedMe(inner iam.CurrentUserService) *cachedMe {
return &cachedMe{
internalImpl: inner,
}
}

func (a *cachedMe) Me(ctx context.Context) (*iam.User, error) {
a.mu.Lock()
defer a.mu.Unlock()
@@ -60,10 +66,7 @@ func (c *DatabricksClient) WorkspaceClient() (*databricks.WorkspaceClient, error
if err != nil {
return nil, err
}
internalImpl := w.CurrentUser.Impl()
w.CurrentUser.WithImpl(&cachedMe{
internalImpl: internalImpl,
})
w.CurrentUser = newCachedMe(w.CurrentUser)
c.cachedWorkspaceClient = w
return w, nil
}
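
The hunk above elides the body of `Me`; a minimal sketch of the caching it performs, assuming a `cachedUser` field that is not visible in this diff, could look like the following (the real implementation in `common/client.go` may differ):

```go
package common

import (
	"context"
	"sync"

	"github.com/databricks/databricks-sdk-go/service/iam"
)

// Sketch of the wrapper installed by newCachedMe: it memoizes the current
// user so repeated identity lookups hit the API only once.
type cachedMe struct {
	internalImpl iam.CurrentUserService
	cachedUser   *iam.User // assumed field name, not shown in the diff
	mu           sync.Mutex
}

func (a *cachedMe) Me(ctx context.Context) (*iam.User, error) {
	a.mu.Lock()
	defer a.mu.Unlock()
	if a.cachedUser != nil {
		// Serve the memoized identity instead of issuing another API call.
		return a.cachedUser, nil
	}
	user, err := a.internalImpl.Me(ctx)
	if err != nil {
		return nil, err
	}
	a.cachedUser = user
	return user, nil
}
```

The new test in `common/client_test.go` below exercises exactly this behavior: two calls to `Me` reach the wrapped service only once.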
20 changes: 20 additions & 0 deletions common/client_test.go
@@ -10,6 +10,7 @@ import (

"github.com/databricks/databricks-sdk-go/client"
"github.com/databricks/databricks-sdk-go/config"
"github.com/databricks/databricks-sdk-go/service/iam"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -315,3 +316,22 @@ func TestGetJWTProperty_Authenticate_Fail(t *testing.T) {
assert.True(t, strings.HasPrefix(err.Error(),
"default auth: azure-cli: cannot get access token: This is just a failing script"))
}

type mockInternalUserService struct {
count int
}

func (m *mockInternalUserService) Me(ctx context.Context) (user *iam.User, err error) {
m.count++
return &iam.User{
UserName: "test",
}, nil
}

func TestCachedMe_Me_MakesSingleRequest(t *testing.T) {
mock := &mockInternalUserService{}
cm := newCachedMe(mock)
cm.Me(context.Background())
cm.Me(context.Background())
assert.Equal(t, 1, mock.count)
}
5 changes: 5 additions & 0 deletions common/util.go
@@ -49,6 +49,11 @@ func MustInt64(s string) int64 {
return n
}

// GetInt64 returns the value stored under the given key, cast to int64
func GetInt64(d *schema.ResourceData, key string) int64 {
return int64(d.Get(key).(int))
}

// Reads the file content from a given path
func ReadFileContent(source string) ([]byte, error) {
log.Printf("[INFO] Reading %s", source)
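
A hypothetical call site for the new `GetInt64` helper — the schema key and value here are illustrative only, not taken from this commit:

```go
package main

import (
	"fmt"

	"github.com/databricks/terraform-provider-databricks/common"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func main() {
	// Build a throwaway ResourceData purely for illustration; in the provider
	// this would be the *schema.ResourceData passed into a CRUD function.
	r := &schema.Resource{Schema: map[string]*schema.Schema{
		"autotermination_minutes": {Type: schema.TypeInt, Optional: true},
	}}
	d := r.TestResourceData()
	_ = d.Set("autotermination_minutes", 15)

	fmt.Println(common.GetInt64(d, "autotermination_minutes")) // prints 15
}
```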
2 changes: 1 addition & 1 deletion common/version.go
@@ -3,7 +3,7 @@ package common
import "context"

var (
version = "1.49.1"
version = "1.50.0"
// ResourceName is resource name without databricks_ prefix
ResourceName contextKey = 1
// Provider is the current instance of provider
1 change: 1 addition & 0 deletions docs/data-sources/notebook.md
@@ -29,3 +29,4 @@ This data source exports the following attributes:
* `language` - notebook language
* `object_id` - notebook object ID
* `object_type` - notebook object type
* `workspace_path` - path on the Workspace File System (WSFS), in the form of `/Workspace` + `path`