Updating Cloud Functions' source code requires changing zip path #1938
Comments
So looks like you just need to update google provider :) |
Nope, #1781 doesn't really solve my issue. With #1781 we only gain the ability to update the function's source if I also change the path of the zip archive in GCS at the same time. In my case I only want to change the content of the .zip blob in-place in GCS (which currently won't trigger an update of the function), not update its location / path all the time. I can change the blob path dynamically by appending the content hash to its path, but that's imho just an ugly workaround :).
|
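For reference, a minimal sketch of the hash-in-path workaround described above, assuming a local ./app directory; the resource names and variables are illustrative, not from the original comment:

```hcl
data "archive_file" "src" {
  type        = "zip"
  source_dir  = "${path.module}/app"
  output_path = "${path.module}/app.zip"
}

resource "google_storage_bucket_object" "archive" {
  # Embedding the zip's md5 in the object path means a code change produces a
  # new object path, which in turn changes source_archive_object on the function.
  name   = "src/${data.archive_file.src.output_md5}/app.zip"
  bucket = var.bucket_name
  source = data.archive_file.src.output_path
}
```
|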
Sadly, I think that may be the only option we have at the moment. :/ I don't see anything in the Cloud Functions API that suggests you can trigger a new deployment without changing the path of the deployed code. In theory, we could probably add something on the Terraform side to make this nicer. I've opened an issue upstream about this. |
The REST API (https://cloud.google.com/functions/docs/reference/rest/v1/projects.locations.functions) seems to support versioning: repeated attempts to deploy increment the returned version. Would it be possible to detect that the bucket object is being updated and re-deploy cloud functions which depend on it? |
Not with Terraform, unfortunately--the storage object resource's changes aren't visible to the function resource unless the field that changes is interpolated. A simple workaround that should work is changing the interpolated object path. [edit] Which was already pointed out. Oops. Sorry about that. |
This workaround is problematic because the function's name cannot exceed 48 characters. |
FWIW, I would like to thumbs-down this enhancement. I don't think this is a reasonable feature request, given that it's not clear what heuristic would be used to detect a change. Leaving it up to the user is probably more reliable. |
I believe serverless puts each zip file in a folder with the upload timestamp. Then it updates the function's source URL to trigger the redeploy. |
@timwsuqld I see what you mean, but I think that is just part of the learning curve of cloud functions. You have to do the exact same thing with lambda + cloudformation (except cloudformation does not provide a way to zip and fingerprint automatically the way that terraform does). Here are the issues with this that I see:
|
Isn't a |
Any update on this? The issue has been open for over a year. The workaround gets it going, but it's hacky. |
June 2020, and I still encountered this issue 🙄 I used to hack around it by adding the md5 of the sources as a prefix in the bucket file name. Here is an example:

```hcl
data "archive_file" "function_archive" {
  type        = "zip"
  source_dir  = var.source_directory
  output_path = "${path.root}/${var.bucket_archive_filepath}"
}

resource "google_storage_bucket_object" "archive" {
  name                = format("%s#%s", var.bucket_archive_filepath, data.archive_file.function_archive.output_md5)
  bucket              = var.bucket_name
  source              = data.archive_file.function_archive.output_path
  content_disposition = "attachment"
  content_encoding    = "gzip"
  content_type        = "application/zip"
}

resource "google_cloudfunctions_function" "function" {
  name                  = var.cloud_function_name
  source_archive_bucket = google_storage_bucket_object.archive.bucket
  source_archive_object = google_storage_bucket_object.archive.name
  available_memory_mb   = var.cloud_function_memory
  trigger_http          = var.cloud_function_trigger_http
  entry_point           = var.cloud_function_entry_point
  service_account_email = var.cloud_function_service_account_email
  runtime               = var.cloud_function_runtime
  timeout               = var.cloud_function_timeout
}
```

Hope that resource |
Yeah, we do that too now, and even turned on a lifecycle policy for the archive bucket so the old archives get deleted after 1 day! |
The only caveat here is some CIs (e.g. Google Cloud Build) mess up permissions on the files to be archived, so one may need to fix them before running this (we do that via an |
Tried with terraform:

```hcl
locals {
  timestamp = formatdate("YYMMDDhhmmss", timestamp())
  func_name = "myFunc"
}

data "archive_file" "function_archive" {
  type        = "zip"
  source_dir  = "path/to/source-folder"
  output_path = "${local.func_name}.zip"
}

resource "google_storage_bucket_object" "archive" {
  name   = "${local.func_name}_${local.timestamp}.zip"
  bucket = var.bucket_name
  source = data.archive_file.function_archive.output_path
}

resource "google_cloudfunctions_function" "function" {
  name                = "${local.func_name}"
  description         = "My function"
  runtime             = "nodejs10"
  available_memory_mb = 128

  source_archive_bucket = var.bucket_name
  source_archive_object = google_storage_bucket_object.archive.name
  trigger_http          = true
  timeout               = 60
  entry_point           = "helloGET"

  labels = {
    my-label = "my-label-value"
  }

  environment_variables = {
    MY_ENV_VAR = "my-env-var-value"
  }
}
```
|
@p4309027 I think |
I think another way of solving this is by using the random provider with a keeper:

```hcl
resource "random_string" "name" {
  length  = 8
  special = false
  upper   = false

  keepers = {
    md5 = filemd5(var.package)
  }
}

resource "google_storage_bucket" "bucket" {
  name     = var.bucket
  location = var.region
}

resource "google_storage_bucket_object" "package" {
  name   = "${var.lambda}-${random_string.name.result}.zip"
  bucket = google_storage_bucket.bucket.name
  source = var.package
}
```
|
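Not part of the original comment, but to complete that sketch: the function would then reference the randomized object name, so a content change (via the md5 keeper) rolls the name and forces a redeploy. Assumed wiring, with illustrative runtime and entry point:

```hcl
resource "google_cloudfunctions_function" "function" {
  name                  = "my-function"
  runtime               = "nodejs10"
  entry_point           = "handler"
  available_memory_mb   = 128
  trigger_http          = true
  source_archive_bucket = google_storage_bucket.bucket.name
  # random_string.name is regenerated whenever filemd5(var.package) changes,
  # so the object name below changes and the function is redeployed.
  source_archive_object = google_storage_bucket_object.package.name
}
```
|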
Hi, I use this little workaround:

```hcl
resource "google_cloudfunctions_function" "my-function" {
  name = "my-function-${regex("(?:[a-zA-Z](?:[-_a-zA-Z0-9]{0,61}[a-zA-Z0-9])?)", google_storage_bucket_object.my_bucket.md5hash)}"
  ...
}
```

As changing the name of the cloud function forces the creation of a new version, I just injected the md5 hash at the end of its name. |
So, a stray thought on my part on how we might (emphasis might; I've got, like, 30% confidence here) be able to support this without hacking around with the function name or the random provider. First, the object replaces itself when the file on disk changes:
Second, we could consider adding a keeper field on the Cloud Function. With a field like |
I agree. We do this, it works fine. I wouldn't call it a hack |
Still no update on this? |
For those coming here via a search, it's worth noting that changing the name of the function has the drawback of also changing the HTTP trigger URL used for a function triggered via HTTP calls. Troublesome for slackbots, API endpoints, etc. You can trigger a redeploy of the source by using the same dynamic md5hash in the zip file name, à la:
Not sure if it matters from the standpoint of redeploying the function itself, but I also changed the description rather than the name of the function to match the hash just to help orient the casual observer to where the source might be.
|
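The snippet that followed "à la:" is not captured here; a rough sketch of the approach being described, reusing an md5-named bucket object as in earlier examples (identifiers are illustrative):

```hcl
resource "google_cloudfunctions_function" "function" {
  name                  = "my-function"                    # unchanged, so the HTTP trigger URL stays stable
  description           = data.archive_file.src.output_md5 # cosmetic: shows which source is currently deployed
  runtime               = "nodejs10"
  entry_point           = "handler"
  available_memory_mb   = 128
  trigger_http          = true
  source_archive_bucket = google_storage_bucket_object.archive.bucket
  # The object name embeds the archive's md5 (as in earlier examples), so this
  # value changes whenever the code changes and the function is redeployed.
  source_archive_object = google_storage_bucket_object.archive.name
}
```
|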
You can use, for example:
this will print only the first 5 letters of the hash string |
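The exact expression is truncated above; assuming something like Terraform's built-in substr function was meant, this would give the first 5 characters of the hash:

```hcl
# Hypothetical illustration; "archive" is a placeholder resource name.
output "short_hash" {
  # substr(string, offset, length): take the first 5 characters of the hash.
  value = substr(google_storage_bucket_object.archive.md5hash, 0, 5)
}
```
|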
Note that per hashicorp/terraform-plugin-sdk#122 (comment), keepers appear to be coming to the |
The workaround mentioned above need not serve the purpose if I:
a. zip the code folder
b. upload it with a fixed name to the deployment bucket (the google_storage_bucket_object "code_zip_gcs" resource)
c. upload it again with a dynamic name (the google_storage_bucket_object "code_zip_gcs_latest" resource; overhead of a redundant copy, but this is just a workaround)
d. deploy the app (here I am deploying to GAE, which has exactly the same issue), using the code_zip_gcs_latest resource, which is the latest deployment appended with, say, a timestamp |
Complementing @vibhoragarwal's response:

a. zip the code folder:

```hcl
data "archive_file" "zip" {
  type        = "zip"
  source_dir  = "${var.root_dir}/src/functions/${var.function_name}"
  output_path = "${var.root_dir}/assets/function-${var.function_name}.zip"
}
```

b. upload it with a fixed name to the deployment bucket:

```hcl
resource "google_storage_bucket_object" "source" {
  name   = "functions-${var.function_name}-source.zip"
  bucket = var.artifact_bucket
  source = data.archive_file.zip.output_path
}
```

c. upload it again with a dynamic name (overhead of a redundant copy, but this is just a workaround):

```hcl
resource "google_storage_bucket_object" "latest_source" {
  name       = "${google_storage_bucket_object.source.name}-${google_storage_bucket_object.source.crc32c}.zip"
  bucket     = var.artifact_bucket
  source     = data.archive_file.zip.output_path
  depends_on = [google_storage_bucket_object.source]
}
```

d. deploy the app:

```hcl
resource "google_cloudfunctions_function" "function" {
  ...
  source_archive_bucket = var.artifact_bucket
  source_archive_object = google_storage_bucket_object.latest_source.output_name
  ...
}
```

Worked perfectly for me. With the double Cloud Storage upload the Cloud Function isn't deployed every time I run |
With Terraform 1.2 I use replace_triggered_by as a workaround.
So every time the sourcecode.zip is uploaded, the function will be replaced. |
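The exact configuration isn't shown above; a minimal sketch of what such a replace_triggered_by setup could look like (Terraform >= 1.2; resource names, runtime and entry point are assumptions):

```hcl
resource "google_cloudfunctions_function" "function" {
  name                  = "my-function"
  runtime               = "nodejs16"
  entry_point           = "handler"
  available_memory_mb   = 128
  trigger_http          = true
  source_archive_bucket = google_storage_bucket_object.sourcecode.bucket
  source_archive_object = google_storage_bucket_object.sourcecode.name # fixed name, e.g. "sourcecode.zip"

  lifecycle {
    # Replace the function whenever the bucket object resource is replaced,
    # which happens whenever the uploaded archive's content changes.
    replace_triggered_by = [google_storage_bucket_object.sourcecode]
  }
}
```
|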
The advantage is that we can keep the same archive name on the bucket. |
nice @mysolo! but there are more characters besides that; this seemed to work a bit better.
|
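@mysolo's labels snippet isn't reproduced above, but assuming the idea is to store a sanitized hash of the source in a label, a sketch along those lines might look like this (whether a label change alone re-reads the source is questioned later in the thread):

```hcl
resource "google_cloudfunctions_function" "function" {
  # ... other arguments as in the examples above ...

  labels = {
    # md5hash is base64-encoded (uppercase letters, "+", "/", "="), none of which
    # are allowed in label values, so lowercase it and strip everything else.
    source-md5 = substr(replace(lower(google_storage_bucket_object.archive.md5hash), "/[^a-z0-9_-]/", ""), 0, 63)
  }
}
```
|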
An alternative to the labels solution @mysolo mentioned above is to set a build-time environment variable containing the SHA of the zip. The variable can be named anything; it'll still trigger a re-deploy of the function.

```hcl
build_config {
  runtime     = "go120"
  entry_point = "FileReceived"

  environment_variables = {
    # Causes a re-deploy of the function when the source changes
    "SOURCE_SHA" = data.archive_file.src.output_sha
  }

  source {
    storage_source {
      bucket = google_storage_bucket.source_bucket.name
      object = google_storage_bucket_object.src.name
    }
  }
}
```
|
The solution @rk295 suggested didn't work for me |
FWIW I got this working for google_cloudfunctions2_function:

```hcl
resource "google_cloudfunctions2_function" "my_function" {
  build_config {
    source {
      storage_source {
        bucket     = google_storage_bucket.my_bucket.name
        object     = google_storage_bucket_object.my_function_source.name
        generation = tonumber(regex("generation=(\\d+)", google_storage_bucket_object.my_function_source.media_link)[0])
      }
    }
  }
}
```
|
The function redeploys only on the next apply after the bucket object has been updated. This doesn't allow for redeploying if the source code changes all in the same apply. I suppose this would work if the source code lives on the machine and not in cloud storage, but I'd love a solution involving cloud storage. EDIT: after deploying a few times in this fashion, it looks like the label gets updated but the source code does not. My workflow consists of zipping the source artifact, running terraform to load the artifact into cloud storage, then deploying the function from the console. Not a very efficient strategy. |
I have tried this but it doesn't seem to work for me, unfortunately. In my case the source code is not stored locally, and thus I'm not defining the archive locally. I defined the bucket object as a data source:

```hcl
data "google_storage_bucket_object" "zip_file" {
  bucket = "my-bucket"
  name   = "cloud_functions.zip"
}
```

and then referenced it in the Cloud Run Function resource as:

```hcl
...
source {
  storage_source {
    bucket     = data.google_storage_bucket_object.zip_file.bucket
    object     = data.google_storage_bucket_object.zip_file.name
    generation = tonumber(regex("generation=(\\d+)", data.google_storage_bucket_object.zip_file.media_link)[0])
  }
}
...
```

But after making changes in the source code and re-uploading the zipped file to the GCS bucket, the source code on the Cloud Run Function doesn't seem to get updated. Any thoughts? |
if I understand this issue correctly, any workaround using |
As a workaround, you can use a google_storage_bucket_object data source:

```hcl
data "archive_file" "source_code" {
  type        = "zip"
  output_path = "src.zip"
  source_dir  = pathexpand(var.source_directory)
  excludes    = []
}

resource "google_storage_bucket_object" "source_code" {
  source = data.archive_file.source_code.output_path
  bucket = var.source_code_bucket
  name   = data.archive_file.source_code.output_path
}

data "google_storage_bucket_object" "source_code" {
  bucket = google_storage_bucket_object.source_code.bucket
  name   = google_storage_bucket_object.source_code.name
}

resource "google_cloudfunctions2_function" "_" {
  build_config {
    source {
      storage_source {
        bucket     = data.google_storage_bucket_object.source_code.bucket
        object     = data.google_storage_bucket_object.source_code.name
        generation = data.google_storage_bucket_object.source_code.generation
        ...
```

NOTE: this really should not be necessary. The resource itself should indicate that the generation is only "known after apply". |
Hi there,
I'm trying to create a cloud function via terraform (which in this particular example forwards error logs to slack, but that's irrelevant for the issue).
The problem is it seems impossible to update a cloud function's source code after its initial deployment via terraform.
As an example, below is my HCL config code. You can see that as part of that code I'm packaging a node.js app located under ./app into a zip file, uploading it to GCS and then using this as the source for the cloud function. Whenever I change something in the source code under ./app, terraform will re-zip and upload the new archive to GCS. However, the corresponding cloud function does not reload the source code from GCS. This is because none of the input params of the cloud function resource has changed. The AWS lambda resource uses an attribute source_code_hash to trigger updates to the function resource when the source code has changed. The google_cloudfunctions_function resource doesn't have any attribute like that, so I cannot trigger an update to the resource. I tried embedding the hash into the description or labels of the resource to trigger an update, and while this creates a new version, that new version still doesn't reload the new source code.
IMHO that makes the current terraform cloud function resource useless in practice. It can only be used to create an initial cloud function but not to update it.
Expectation:
Please add an attribute source_code_hash or similar to the cloud function resource to allow updates of the source code via terraform.
Terraform Version
Affected Resource(s)
Please list the resources as a list, for example:
Terraform Configuration Files
main.tf
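The original main.tf is not reproduced here; a minimal sketch of the kind of configuration described above (packaging ./app into a zip, uploading it to GCS under a fixed name, and pointing the function at it), with all identifiers illustrative:

```hcl
data "archive_file" "app" {
  type        = "zip"
  source_dir  = "${path.module}/app"
  output_path = "${path.module}/app.zip"
}

resource "google_storage_bucket_object" "app" {
  name   = "app.zip" # fixed object path: uploading new content changes no input of the function below
  bucket = var.bucket_name
  source = data.archive_file.app.output_path
}

resource "google_cloudfunctions_function" "app" {
  name                  = "error-log-forwarder" # illustrative; the issue describes forwarding error logs to Slack
  runtime               = "nodejs10"
  entry_point           = "handler"
  available_memory_mb   = 128
  trigger_http          = true
  source_archive_bucket = google_storage_bucket_object.app.bucket
  source_archive_object = google_storage_bucket_object.app.name
}
```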
b/249753001