[Premium] Can't deploy Function App to storage account with VNET restriction #1361
Comments
Does restricting access to the storage account after the ARM template deployment work? Or will it generate more errors during function deployment (`az functionapp deployment ...`)?
A bit of a longer answer to explain what the current state is today, and the options for deploying code to an app with VNet restrictions. This all assumes the Azure Functions Premium plan, which supports regional VNet Integration.

The Premium plan's VNet support enables the function app itself to make secure connections to resources restricted to a private network. This includes calling a resource within a VNet (like a VM), or a service that has service endpoints enabled (like storage). Calls made during the execution of an app should work just fine with the out-of-the-box VNet support.

There are some layers outside of the function app itself (so outside of your "container" running the app, but on the underlying service infrastructure) that may also need access to a resource behind a VNet. One of the most common examples is a trigger. If you wanted to trigger on a storage account, for example, we have pieces of the service (called the event-driven scale controller) that monitor the storage account and scale based on the number of events. By default, that component runs outside of your app, but it also needs access to your VNet. However, we have since added a feature that lets you toggle things so those checks happen within the context of your app. It's documented here. More or less, how it works is that our infrastructure asks your app "hey, can you check this storage account for me for events? It has service endpoints so I can't see it," and your app makes the secure call over the VNet and returns the info. You can opt into that setting.

That's a long intro to saying there are still a few other pieces of the service outside of your "app" VM/container. One of them is the file system that hosts your code and then mounts that code into your "container." Today, we don't have a way for that file system to mount a source that is behind a VNet. The file system that has your code is specifically the Azure Files share referenced by the `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` and `WEBSITE_CONTENTSHARE` settings.
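For reference, the content settings described above typically appear among the app settings of the function app's ARM template like this (a minimal excerpt; the account name, key placeholder, and share name are illustrative):

```json
[
  {
    "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
    "value": "DefaultEndpointsProtocol=https;AccountName=mystorageacct;AccountKey=<key>;EndpointSuffix=core.windows.net"
  },
  {
    "name": "WEBSITE_CONTENTSHARE",
    "value": "my-content-share"
  }
]
```

It is this Azure Files share that the platform mounts into the app's container, which is why locking the storage account to a VNet breaks the mount.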
Thanks for taking this up @jeffhollan. However, after removing `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` and `WEBSITE_CONTENTSHARE`, in the Azure portal I now see:
When trying to deploy the function from the command line (
Also, since we use this function as a scheduler (to start a container every hour), another variable (
The solution worked great for us @jeffhollan: we enhanced our Azure DevOps release pipelines so the ARM provisioning does not include `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` and `WEBSITE_CONTENTSHARE`. Our VNet-connected Premium function is then provisioned as expected. HOWEVER: the docs at https://docs.microsoft.com/en-us/azure/azure-functions/functions-app-settings should be updated to reflect this issue (it also applies when you have an open VNet connected to your function app). They should clearly state the impact: it cost us a day of troubleshooting (it worked when I deployed from Visual Studio 2019, but not when we used our release pipelines).
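The pipeline change described above amounts to filtering two keys out of the app-settings map before provisioning. A minimal sketch, assuming your pipeline builds the settings as a plain dict (the helper name is hypothetical, not part of any Azure SDK):

```python
# Settings the platform uses to mount the content file share. On a
# VNet-restricted storage account the mount fails, so the workaround
# discussed in this thread is to omit them from provisioning.
VNET_BLOCKED_SETTINGS = {
    "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
    "WEBSITE_CONTENTSHARE",
}

def strip_content_settings(app_settings: dict) -> dict:
    """Return a copy of app_settings without the content-share settings."""
    return {k: v for k, v in app_settings.items() if k not in VNET_BLOCKED_SETTINGS}
```

For example, `strip_content_settings({"AzureWebJobsStorage": "...", "WEBSITE_CONTENTSHARE": "x"})` keeps only `AzureWebJobsStorage`.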
The only problem I see when removing the settings `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` and `WEBSITE_CONTENTSHARE` is that the host keys are removed; even attempting to create new keys returns a bad request.
@jeffhollan - We're currently using
Had a similar issue recently while provisioning a logic app v2 with services restricted by VNet + service endpoints. The answer from @jeffhollan was really helpful for understanding some of what was going on and certainly put me in the right direction. I'm provisioning all the infrastructure using Terraform; during that stage I restrict the newly created storage account to a VNet, create the logic app, and also associate it with the VNet. The logic app needs to configure itself with storage for `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`, `WEBSITE_CONTENTSHARE`, and `AzureWebJobsStorage`. After provisioning all my dependencies, both the logic application and the Kudu interface were unresponsive, returning 502/503s. Behaviour seemed quite erratic here; I also experienced
As per #1349 (comment) I had `WEBSITE_VNET_ROUTE_ALL = 1`, as I needed to route all outbound traffic via a NAT gateway, and I set `WEBSITE_CONTENTOVERVNET = 1`, to no avail. Jeff's point about the content share and the 'Access to path' error message pointed me to the storage account file share, which had not been created. Therefore, rather than leaving it to the runtime to create the share, I added an explicit file-share creation to our provisioning script, then passed the file share name to the logic app for provisioning, so the sequence of steps becomes:
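The explicit file-share creation step could be sketched in Terraform like this (a sketch only; the resource names, the share name, the quota, and the `azurerm_storage_account.funcs` reference are assumptions, not from the thread):

```hcl
# Create the content share up front so the runtime doesn't have to;
# the runtime can't reach the VNet-restricted account to create it itself.
resource "azurerm_storage_share" "content" {
  name                 = "logicapp-content"
  storage_account_name = azurerm_storage_account.funcs.name
  quota                = 50
}
```

The share name (`azurerm_storage_share.content.name`) is then what gets passed to the app as `WEBSITE_CONTENTSHARE`.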
@dylan-asos I am also facing this issue, but for function apps. From what I read (more specifically from @jeffhollan's comment), it was that...
Can anyone confirm the statement above? If it is true, then how does what @dylan-asos did work? Or is it different for logic apps? I am currently trying the approach that was shown to work for logic apps, applying it to function apps to see if it works. Automation does the following...
After doing this, it was successful. FYI, for a durable function app I also had to create the queue private endpoint, which is listed in the documentation.
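One of the private endpoints described above might look like the following ARM fragment for the `file` subresource (parameter names and the endpoint name are placeholders; for Durable Functions the same shape is repeated with `"queue"` in `groupIds`, and commonly `"blob"` and `"table"` as well):

```json
{
  "type": "Microsoft.Network/privateEndpoints",
  "apiVersion": "2021-05-01",
  "name": "pe-storage-file",
  "location": "[resourceGroup().location]",
  "properties": {
    "subnet": { "id": "[parameters('peSubnetId')]" },
    "privateLinkServiceConnections": [
      {
        "name": "file",
        "properties": {
          "privateLinkServiceId": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageName'))]",
          "groupIds": [ "file" ]
        }
      }
    ]
  }
}
```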
@hecflores how did you get on? As per https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-vnet, one of the steps there is to create a file share, e.g. https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-vnet#create-a-file-share. The sequence of events listed there is essentially what I'm doing now.
@dylan-asos I updated my comment. It was successful for me! :)
@erwelch did you find any workaround?
The suggestion to not define `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` and `WEBSITE_CONTENTSHARE` did not work for us. What finally worked was setting `WEBSITE_RUN_FROM_PACKAGE` to a package URL. Note that the docs warn that startup performance will be worse when using a URL instead of `WEBSITE_RUN_FROM_PACKAGE = 1`.
@jeffhollan any update on when this issue will be resolved?
@jeffhollan bump
The problem is still there even with manual creation of the resources. Is there any update from MS on when this is going to be resolved? It seems like a pretty old issue.
So, when using Terraform to create the file share, the subnet for the DevOps build agent (running as a Linux VM with access to the VNet) needs to be included in the storage account's allowed subnets, right? Otherwise I get an authorization error when the check for the existence of the file share is made:

Error: checking for existence of existing Storage Share "logicXXXXX-q01-content" (Account "stXXXXXq01" / Resource Group "RG-XXXXXX-Q01"): shares.Client#GetProperties: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailure" Message="This request is not authorized to perform this operation.\nRequestId:94XXXXX00\nTime:2022-03-17T17:44:28.6416893Z"

If that's the case, this is pretty unsatisfying, since allowing the DevOps agents' subnet would only be needed for creating this file share.
Following the steps listed above worked for me as well.
Below is an ARM template which creates:

1. A storage account with a VNet restriction (`virtualNetworkRules`)
2. A function app whose `AzureWebJobsStorage` and `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` settings point at that storage account
Resource 2 fails with the error captured in the attached TemplateAndError.zip.
I believe the Microsoft.Web ARM provider is attempting a connection to the storage account and can't reach it. (I enabled "Allow trusted Microsoft services to access this storage account", but that isn't helping.)
An obvious workaround is to take the ACL out of the storage account, then deploy a second time later with the ACL applied.
Is there a cleaner workaround? It's a common practice for people to deploy the Function App and Storage Account all in the same template.
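The two-pass workaround above could be expressed by parameterizing the ACL in the storage account resource (a sketch; the parameter names are illustrative, with the first deployment passing `Allow` and the second passing `Deny` once the function app content is in place):

```json
"networkAcls": {
  "defaultAction": "[parameters('storageDefaultAction')]",
  "bypass": "AzureServices",
  "virtualNetworkRules": [
    { "id": "[parameters('functionSubnetId')]" }
  ]
}
```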
See also: https://github.com/Azure/azure-functions-ux/issues/2279