
Data providers #99

Open
lkd opened this issue Apr 28, 2021 · 4 comments · May be fixed by #127
Assignees
Labels
enhancement (New feature or request) · good first issue (Good for newcomers)

Comments


lkd commented Apr 28, 2021

Terraform Version

(latest Boundary provider)

Affected Resource(s)

(n/a)

Terraform Configuration Files

(n/a)

Expected Behavior

Read-only data providers would be nice. Modules would then just need a provider "boundary" {} block (which they need anyway just to maintain Boundary objects via TF) to figure out where to create boundary_host, boundary_host_set, and boundary_target objects; primarily the project scope they belong in.
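For illustration, a module-side sketch of what this could look like. The data sources here (boundary_scope, boundary_host_catalog) are hypothetical, named only to show the shape of the request, and the boundary_host attributes are from memory, so check them against the provider docs:

data "boundary_scope" "project" {
    # hypothetical: look up the project scope by name under a parent org scope
    name     = "databases"
    scope_id = "o_1234567890"   # placeholder org ID
}

data "boundary_host_catalog" "static" {
    # hypothetical: look up the host catalog by name within that project
    name     = "databases"
    scope_id = data.boundary_scope.project.id
}

resource "boundary_host" "db" {
    type            = "static"
    name            = "rds-primary"
    host_catalog_id = data.boundary_host_catalog.static.id
    address         = "db.internal.example.com"
}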

Actual Behavior

They don't exist.

Instead, the consuming module needs an input variable: a large map of port codes whose values are themselves maps that convey the correct boundary_scope.id and boundary_host_catalog.id.

Important Factoids

Use case

My org has distributed, (mostly) loosely-coupled infrastructure-as-code for configuring a number of backing data stores, such as AWS RDS. Containing the setup of the boundary resources that permit connectivity would be easier if those loosely-coupled IaC configs (really just one module) could use data providers to find the correct project scopes and host catalogs, instead of being passed a very large map and selecting from it. We are happy to trade 1 lookup for M if our CD pipeline gets simpler (it wouldn't need to fetch and compose a map of existing host catalogs and scopes before running).
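For illustration, roughly the shape of the map we pass around today (names and IDs are placeholders):

variable "boundary_ids" {
    description = "port code => Boundary IDs, assembled by the CD pipeline before every run"
    type = map(object({
        scope_id        = string
        host_catalog_id = string
    }))
}

# e.g. passed in as:
# boundary_ids = {
#   "5432" = { scope_id = "p_aaaa11111111", host_catalog_id = "hcst_bbbb22222222" }
#   "3306" = { scope_id = "p_cccc33333333", host_catalog_id = "hcst_dddd44444444" }
# }

locals {
    rds_scope_id        = var.boundary_ids["5432"].scope_id
    rds_host_catalog_id = var.boundary_ids["5432"].host_catalog_id
}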

malnick added the enhancement (New feature or request) label May 17, 2021
Contributor

malnick commented May 17, 2021

Thanks for opening this! This is on our road map. If there are specific data sources you'd prioritize, we'd love to hear about them. Always open to PRs for this too!

malnick added the good first issue (Good for newcomers) label May 17, 2021
malnick self-assigned this May 17, 2021
Collaborator

chuckyz commented Jun 1, 2021

I could've sorely used top-level scopes, scope (w/ scope_id), and host-catalogs data providers today. I'm getting around this with high-level u_anon privileges in certain areas (Note: Don't Do This ™️) and terraform that looks like this...

data "http" "scopes" {
    url = "http://localhost:9200/v1/scopes"
}

data "http" "project" {
    url = "http://localhost:9200/v1/scopes?scope_id=${local.primary_scope_id}"
}

data "http" "host_catalogs" {
    url = "http://localhost:9200/v1/host-catalogs?scope_id=${local.database_project_id}"
}

locals {
    primary_scope_id = [for i in jsondecode(data.http.scopes.body).items : i.id if i.name == "primary"][0]
    database_project_id = [for i in jsondecode(data.http.project.body).items : i.id if i.name == "databases"][0]
    database_host_catalog_id = [for i in jsondecode(data.http.host_catalogs.body).items : i.id if i.name == "databases"][0]
}
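For context, those locals then feed ordinary provider resources, roughly like the following sketch (attribute names are from memory, so double-check them against the provider docs):

resource "boundary_target" "databases" {
    type         = "tcp"
    name         = "databases"
    scope_id     = local.database_project_id
    default_port = 5432
}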

lkd is also on point with what else would be nice.

remilapeyre added a commit to remilapeyre/terraform-provider-boundary that referenced this issue Jul 5, 2021
This is an experiment to see whether generating the provider based on
the OpenAPI specification at https://github.com/hashicorp/boundary/blob/main/internal/gen/controller.swagger.json
could work.

The schema is converted from the definitions given in the document to
map[string]*schema.Schema, with two special cases:
  - when there is an object in an object, I convert it to a one-element
  list, as terraform-plugin-sdk v2 does not know how to express this,
  - when there is an opaque attribute (`map[string]interface{}`), I
  skip it completely, as terraform-plugin-sdk does not expose the
  `DynamicPseudoType` that would make it possible to express this
  attribute in native Terraform. The workaround I use in the Consul
  provider is `schema.TypeString` plus `jsonencode()`, but it is not
  ideal. Since I only worked on the datasources here, I chose to skip
  those attributes for now.

Once the schema is converted, we create the `ReadContext` function that
is needed for the datasource. As it can be a bit tricky to use the Go
client for each service, I chose to use the global *api.Client directly
and to manually add the query params and get the raw response. While it
would not be recommended for an external project to use the client this
way, it fits nicely here and keeps the code simple. Finally, the result is
written to the state, using the schema we generated previously to
convert it.

The tests are written manually so the developer can make sure that
everything is working as expected even though the code was generated and
not written manually.

While the conversion of the schema could be done at runtime and only
one `ReadContext` function is actually needed, I find that generating the
code makes it quite easy to review, and it should make it easier for
contributors already accustomed to writing Terraform providers to look
for errors or fork the provider for their needs.

While I only worked on datasources returning lists of elements for now,
I think the same approach could be used to generate datasources returning
a single element and ultimately resources. This would make it very easy
to keep the Terraform provider in sync with new Boundary versions,
especially as the OpenAPI spec is created from the Protobuf files and
the CLI is already generated on a similar principle.

The code in generate_datasource.go is not very nice, but it does get the
job done. I may spin it off into its own project in the future to add
more features to it.

Closes hashicorp#99
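To make the generated-datasource idea and the nested-object convention above concrete, here is a hypothetical sketch of how one of the generated list datasources might be consumed; the names and attributes are illustrative, derived from the OpenAPI definitions rather than from a released provider:

# Hypothetical generated datasource returning a list of scopes.
data "boundary_scopes" "all" {
    scope_id = "global"
}

# Nested objects are exposed as one-element lists, so a nested scope-info
# object on each item would be read as item.scope[0].<field>.
output "first_item_parent_scope" {
    value = data.boundary_scopes.all.items[0].scope[0].parent_scope_id
}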
@micchickenburger

I'm hoping to revive this issue. We have separate projects with their own CI pipelines that add their own resources to Boundary. To do this, these projects need the IDs of the OIDC auth method and of the project scope they add resources to. Being able to use Boundary data providers would solve this. We have a workaround, but it's painful: we add the OIDC auth method ID and project ID as TF_VAR variables in the CI pipelines, and every time we tear down the Boundary infrastructure and rebuild it we have to update all these variables across a fair number of projects. I unfortunately don't have any experience writing Go, but I'm happy to help in any other way.
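For illustration, roughly what that workaround looks like today (variable names are placeholders):

variable "boundary_oidc_auth_method_id" {
    type = string   # set via TF_VAR_boundary_oidc_auth_method_id in each CI pipeline
}

variable "boundary_project_scope_id" {
    type = string   # set via TF_VAR_boundary_project_scope_id in each CI pipeline
}

# Every teardown/rebuild of Boundary changes these IDs, so every pipeline's
# variables have to be updated by hand. A lookup by name via data sources
# would remove that coupling.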

@patoarvizu

I commented this on a related ticket, but I want to repeat it here to hopefully have more visibility.

I would also be interested in a boundary_worker data source. The use case is to be able to discover the managed HCP workers so that multi-hop configurations are easier to set up.
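For illustration, something along these lines; this is a hypothetical data source, and the attributes are made up to show the idea:

# Hypothetical lookup of an HCP-managed worker by name.
data "boundary_worker" "ingress" {
    name = "hcp-managed-ingress"
}

# The worker ID could then feed downstream worker configuration or
# worker filters for multi-hop sessions.
output "ingress_worker_id" {
    value = data.boundary_worker.ingress.id
}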

moduli pushed a commit to remilapeyre/terraform-provider-boundary that referenced this issue May 7, 2024