Add gp3 storage_type support to aws_db_instance #441

Closed
crimean-celica opened this issue Nov 9, 2022 · 6 comments

@crimean-celica

Is your request related to a new offering from AWS?

hashicorp/terraform-provider-aws#27702

@bryantbiggs
Member

there aren't any changes required here - once the provider supports the new attribute value, you can simply specify it
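For reference, a minimal sketch of what "just specifying it" could look like through this module once a provider release accepts the value. The registry source is this module's, but the version pin, engine, and sizing values below are illustrative, and other required inputs (credentials, networking, etc.) are omitted:

```hcl
# Illustrative sketch only - assumes an AWS provider release that accepts
# storage_type = "gp3" for aws_db_instance.
module "db" {
  source  = "terraform-aws-modules/rds/aws"
  version = "~> 5.0" # hypothetical pin, use whatever version you already run

  identifier     = "example"
  engine         = "postgres"
  engine_version = "14"
  instance_class = "db.t4g.micro"

  allocated_storage = 50
  storage_type      = "gp3" # passed through as-is to aws_db_instance
}
```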

@YanRah

YanRah commented Nov 10, 2022

the provider supports the new attribute value, but I am getting this error when I apply:

Error: expected storage_type to be one of [standard gp2 io1], got gp3

│ with module.ef-core.module.db.module.db_instance.aws_db_instance.this[0],
│ on .terraform/modules/ef-core.db/modules/db_instance/main.tf line 42, in resource "aws_db_instance" "this":
│ 42: storage_type = var.storage_type

@antonbabenko
Member

It is not yet supported by the Terraform AWS provider - hashicorp/terraform-provider-aws#27702. There is nothing we can or should do about it in the module.
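Once the linked provider PR ships in a release, a version constraint like the one below should pick it up without any change in this module. The exact version number is an assumption and should be checked against the provider changelog:

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Assumption: 4.41.0 stands in for whichever release actually includes
      # hashicorp/terraform-provider-aws#27702 - verify in the provider CHANGELOG.
      version = ">= 4.41.0"
    }
  }
}
```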

@klesher

klesher commented Nov 14, 2022

there aren't any changes required here - once the provider supports the new attribute value, you can simply specify it

With the new gp3 storage type, IOPS will apparently already be covered via the iops input used for the io1 storage type. However, the volume's "Storage Throughput" will now also be a configurable option. Please see the terraform-provider-aws PR that implements gp3.

Also note that, unlike EC2 gp3 EBS volumes, the baseline performance changes from 3,000 IOPS / 125 MiBps to 12,000 IOPS / 500 MiBps for volumes larger than 400 GiB. I'm assuming this sort of validation is preferred to be left to the upstream provider, but figured it's worth mentioning!

I likely won't have a need to modify storage throughput on any of my volumes, but wanted to at least give a heads-up that I believe, at minimum, a new variable will be needed in the module (once upstream support is provided, of course).
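To make that concrete, here is a hypothetical sketch of the extra input the db_instance submodule could expose, with the name mirroring the provider's new argument. This is a guess at a future implementation, not anything that exists in the module today:

```hcl
# Hypothetical addition to modules/db_instance/variables.tf
variable "storage_throughput" {
  description = "Storage throughput value for the DB instance. Only applies to the gp3 storage type"
  type        = number
  default     = null
}

# ...which modules/db_instance/main.tf would then forward on the resource:
#   storage_throughput = var.storage_throughput
```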

@bryantbiggs
Member

thank you for the details @klesher - however, this is exactly why we have the feature request template that calls out the relevant information that's necessary: https://github.com/terraform-aws-modules/.github/blob/master/.github/ISSUE_TEMPLATE/feature_request.md#is-your-request-related-to-a-new-offering-from-aws

we should fill out the feature request template properly and reference any provider-level PRs, and then once the functionality has landed in the provider we can look at implementation here

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Dec 16, 2022