Azure Blob storage adapter – BizTalk Server 2020

You may already be familiar with the newest version of Microsoft BizTalk Server, the brand-new 2020 release; if not, no worries, it’s easy enough to get to know. BizTalk Server 2020 ships with some nice features that extend the BizTalk platform to the cloud and enable more hybrid integrations.

Some of the features released earlier as part of the Feature Packs for BizTalk Server 2016 are also incorporated into the newest version of BizTalk Server.

Do you work frequently with Azure Blob Storage and would like to use it more in your integration flows? If the answer is yes, then you can leverage one of the new features of BizTalk Server 2020.

I am talking about the built-in Azure Blob Storage adapter for sending and receiving payloads to and from Azure Blob Storage.

Let’s see how this can be set up.

Prerequisites:

  1. An Azure subscription; you can get a trial subscription from https://azure.microsoft.com/en-us/free/
  2. A BizTalk Server 2020 environment; if you’re missing that too, just sign up for a trial subscription and fire up a virtual machine with the new BizTalk Server 2020.
  3. Basic knowledge of BizTalk.

In the versions before BizTalk Server 2020, if we needed to work with Azure Blob Storage, we could use the WCF adapters with the endpoint behavior called “Azure Storage Behavior”, but we had to deal with configuration details like “defaultDataServiceVersion”, “defaultMaxDataServiceVersion”, “defaultXmsVersion”, etc.
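For reference, the old WCF-based setup involved an endpoint behavior configuration roughly like the sketch below. Treat it as a hypothetical illustration: the element name and the version values shown here are placeholders, and the exact names depended on the behavior extension you had installed; only the three property names come from the description above.

```xml
<!-- Hypothetical sketch of the pre-2020 WCF endpoint behavior config.
     Element name and version values are placeholders for illustration. -->
<behaviors>
  <endpointBehaviors>
    <behavior name="AzureStorageBehavior">
      <azureStorageBehavior defaultDataServiceVersion="3.0"
                            defaultMaxDataServiceVersion="3.0"
                            defaultXmsVersion="2015-04-05" />
    </behavior>
  </endpointBehaviors>
</behaviors>
```

With the new built-in adapter, none of this hand-written configuration is needed.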

In the new version, you don’t need to occupy yourself with these configurations; in fact, configuring the use of Azure Blob Storage is pretty easy and straightforward.

Let’s see how we can do that with a simple integration flow.

Scenario 1

Receive files from Blob Storage:

  • Create a receive port, for example “RcvFromAzureBlobStorage”
  • Add a receive location “RcvFromAzureBlobStorage_AZ”, select the transport type “AzureBlobStorage”, and click Configure. Sign in with your Azure account, select the subscription (in case you have more than one), and choose the resource group; in my case it’s “HybridIntegrations”

  • Then go to the “General” tab and choose your authentication type from the options below:
  1. Shared Access Signature
  2. Access keys

If you want to authenticate with “Shared access signature”, choose that option, enter the “Connection String” of the storage account, select the “Blob container name”, the “Blob name prefix” (in my case it was ‘List’) and the “Namespace for blob metadata”, then click Apply.

Alternatively:

If you want to authenticate using the “Access key”, choose that option and select the “Account” (in my case it is “hybridstorageaccnt”); this automatically populates the connection string. Then choose the “Blob container name”, the “Blob name prefix”, and enter the “Namespace for blob metadata”.
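To make the “Access key” option less of a black box, here is a minimal sketch of how that kind of storage connection string is structured, with a small parser to pull out its parts. The account name matches the example above; the key value is a placeholder, not a real credential.

```python
# Minimal sketch: split an Azure Storage access-key connection string
# ("Key=Value;Key=Value;...") into its named parts.
def parse_connection_string(conn_str: str) -> dict:
    """Parse 'Key=Value' segments separated by ';' into a dict."""
    parts = {}
    for segment in conn_str.split(";"):
        if not segment:
            continue
        key, _, value = segment.partition("=")
        parts[key] = value
    return parts

# Placeholder connection string; AccountKey is NOT a real key.
conn = ("DefaultEndpointsProtocol=https;"
        "AccountName=hybridstorageaccnt;"
        "AccountKey=PLACEHOLDER_KEY==;"
        "EndpointSuffix=core.windows.net")

parsed = parse_connection_string(conn)
print(parsed["AccountName"])  # hybridstorageaccnt
```

This is exactly the string the adapter fills in for you when you pick the account, which is why choosing the account by name is all you have to do.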

  • Go to the “Advanced” tab and configure “Polling interval”, “Maximum messages per batch”, “Parallel download” and “Error threshold” as needed, then click OK.

FYI, the error threshold tracks the number of errors and disables the receive location once the threshold you set is crossed.
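Conceptually, the error-threshold behavior works like the small sketch below. This is purely an illustration of the idea, not the adapter’s actual implementation; the class and method names are made up.

```python
# Hypothetical sketch of the error-threshold behavior: count errors and
# disable the receive location once the configured threshold is reached.
class ReceiveLocation:
    def __init__(self, error_threshold: int):
        self.error_threshold = error_threshold
        self.error_count = 0
        self.enabled = True

    def record_error(self) -> None:
        self.error_count += 1
        if self.error_count >= self.error_threshold:
            self.enabled = False  # location gets disabled

loc = ReceiveLocation(error_threshold=3)
for _ in range(3):
    loc.record_error()
print(loc.enabled)  # False
```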

Testing:

Create a send port (an easy one for testing is a FILE port) and set the filter on the send port as “BTS.ReceivePortName” == “YourRcvPortName”

The receive location will poll according to the specified polling interval, pick up all the blobs with the configured “prefix”, and the messages will be sent to the file location configured in your send port.
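The prefix matching the receive location performs on each poll can be sketched in a few lines. The blob names below are made up for illustration; the prefix ‘List’ is the one used in this example.

```python
# Sketch of the per-poll step: keep only the blobs whose names start with
# the configured "Blob name prefix". Blob names here are illustrative.
def match_prefix(blob_names: list, prefix: str) -> list:
    return [name for name in blob_names if name.startswith(prefix)]

blobs = ["List_Customers.xml", "List_Orders.xml", "Archive_2020.xml"]
print(match_prefix(blobs, "List"))  # ['List_Customers.xml', 'List_Orders.xml']
```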

Scenario 2

Send files to Blob Storage:

  • Create a send port; in my example it’s called “SndFilesToAzureBlobStorage_AZ”
  • Select the transport type “AzureBlobStorage” and click Configure
  • Click Sign In, then choose your subscription and resource group as shown earlier in Scenario 1
  • You can choose either of the storage authentication options: “Shared Access Signature” or “Access Key”

For Shared Access Signature:

Enter the connection string, select the “Blob container name”, and enter the “Blob name”. The “Namespace for blob metadata” can be used as a filter: message context properties whose namespace matches it are written to the blob metadata. In my case I am not configuring this field since I am doing a simple test scenario, but feel free to play around with it.

Or choose “Access Key” and configure it accordingly. The “Blob name” is required and must not exceed 1024 characters; in my case I’m using “%SourceFileName%”
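The “%SourceFileName%” setting and the 1024-character limit can be sketched as below. This is a simplified illustration of only this one macro and the length check mentioned above; the adapter supports its own macro set, and the function name here is made up.

```python
# Sketch: expand the %SourceFileName% macro in the configured "Blob name"
# and enforce the 1024-character limit. Simplified illustration only.
def build_blob_name(template: str, source_file_name: str) -> str:
    name = template.replace("%SourceFileName%", source_file_name)
    if len(name) > 1024:
        raise ValueError("Blob name must not exceed 1024 characters")
    return name

print(build_blob_name("%SourceFileName%", "Order_001.xml"))  # Order_001.xml
```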

  • In the “Advanced” tab, you can select the “Blob type” (Block blob / Page blob / Append blob) and the “Write mode” (Create new / Overwrite) based on your needs, then click “OK”.
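The difference between the two “Write mode” options can be sketched with a dict standing in for the blob container: “Create new” fails if the blob already exists, while “Overwrite” replaces it. This is an illustration of the semantics only, not the adapter’s code.

```python
# Sketch of the two "Write mode" options, with a dict standing in for the
# blob container. "Create new" refuses to replace an existing blob;
# "Overwrite" replaces it.
def write_blob(container: dict, name: str, data: bytes, mode: str) -> None:
    if mode == "Create new" and name in container:
        raise FileExistsError(f"Blob '{name}' already exists")
    container[name] = data

store = {}
write_blob(store, "test.xml", b"<a/>", "Create new")
write_blob(store, "test.xml", b"<b/>", "Overwrite")
print(store["test.xml"])  # b'<b/>'
```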

Testing:

  • Create a receive port (in my case I chose the easy option, a FILE port).
  • Set the filter on the send port (which writes to Azure Blob Storage) to “BTS.ReceivePortName” == “YourRcvPortName”

Once you drop the files into the test input location, your send port will write them to the configured blob container in the configured storage account.

Thank you for reading!

Hope this gave you an idea of the built-in Azure Blob Storage adapter in BizTalk Server 2020.

Keep learning; it’s not always about how quickly we learn, it’s about how efficiently we learn.

FB(I) from BB(D)
