Azure Blob storage adapter – BizTalk Server 2020

You may already be familiar with the newest version of Microsoft BizTalk Server, the brand-new 2020 release; if not, no worries, you can get to know it easily enough. BizTalk Server 2020 ships with some nice features that extend the BizTalk platform to the cloud and enable you to do more hybrid integrations.

Some of the features that were released earlier as part of the Feature Packs for BizTalk Server 2016 are also incorporated in the newest version of BizTalk Server.

Do you work frequently with Azure Blob Storage and would you like to use it more in your integration flows? If the answer is yes, you can leverage one of the new features of BizTalk Server 2020.

I am talking about the built-in Azure Blob Storage adapter for sending and receiving payloads to and from Azure Blob Storage.

Let’s see how this can be set up.


What you need:

  1. An Azure subscription; a trial subscription works fine.
  2. A BizTalk Server 2020 environment; if you're missing that too, just sign up for a trial subscription and fire up a Virtual Machine with the new BizTalk Server 2020.
  3. Basic knowledge about BizTalk.

In the versions before BizTalk Server 2020, if you needed to work with Azure Blob Storage you could use the WCF adapters with the endpoint behavior called "Azure Storage Behavior", and you had to deal with configuration details like "defaultDataServiceVersion", "defaultMaxDataServiceVersion", "defaultXmsVersion", etc.

In the new version you don't need to occupy yourself with these configurations; in fact, it's pretty easy and straightforward to configure the use of Azure Blob Storage.

Let’s see how we can do that with a simple integration flow.

Scenario 1

Receive files from Blob Storage:

  • Create a Receive Port, for example "RcvFromAzureBlobStorage".
  • Add a receive location "RcvFromAzureBlobStorage_AZ", select "AzureBlobStorage" as the Transport Type and click Configure. Sign in with your Azure account, select the subscription (in case you have more than one) and choose the Resource Group; in my case it's "HybridIntegrations".

  • Then go to the "General" tab and choose your authentication type from the options below:
  1. Shared Access Signature
  2. Access keys

If you want to authenticate with "Shared access signature", choose that option, enter the "Connection String" of the storage account, select the Blob Container Name, the Blob Name prefix (in my case 'List') and the Namespace for blob metadata, and click Apply.
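For reference, a shared-access-signature connection string for a storage account generally follows this shape (all values below are placeholders, not real values; you can copy the real string from the storage account's "Shared access signature" blade in the Azure portal):

```
BlobEndpoint=https://<accountname>.blob.core.windows.net/;SharedAccessSignature=sv=<service-version>&ss=b&srt=co&sp=rl&se=<expiry>&sig=<signature>
```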

Otherwise:

If you want to authenticate using an "Access Key", choose that option and select the "Account" (in my case "hybridstorageaccnt"), which automatically populates the Connection String. Then choose the "Blob Container name", the "Blob name prefix" and enter the "Namespace for blob metadata".

  • Go to “Advanced” tab and configure “Polling Interval”, “Maximum messages per batch”, “Parallel download” and “Error threshold” as per your need and click OK.

FYI, the error threshold is the number of errors (as set by you) after which the receive location is disabled.


Create a Send Port (a file port is an easy one for testing) and set the filter on the send port as "BTS.ReceivePortName" == "YourRcvPortName".
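If you later export the bindings, that filter shows up in the send port's configuration roughly like the fragment below (a sketch, not an exact export; Operator 0 is the equals operator, and the port name is the one from this example):

```xml
<Filter>
  <Group>
    <Statement Property="BTS.ReceivePortName" Operator="0" Value="RcvFromAzureBlobStorage" />
  </Group>
</Filter>
```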

The receive location polls according to the specified polling interval, picks up all the blobs with the configured prefix, and the messages are sent to the file location configured in your Send Port.

Scenario 2

Send files to Blob Storage:

  • Create a send port; in my example it's called "SndFilesToAzureBlobStorage_AZ".
  • Select "AzureBlobStorage" as the transport type and click Configure.
  • Click Sign In and choose your subscription and resource group as shown earlier in Scenario 1.
  • You can choose either of the storage authentication options: "Shared Access Signature" or "Access Key".

For Shared Access Signature:

Enter the connection string, select the "Blob container name" and enter the "Blob name". The "Namespace for blob metadata" acts as a filter: message context properties are written to blob metadata if the namespace of the property matches. In my case I'm not configuring that field since this is a simple test scenario, but feel free to play around with it.

Or choose "Access Key" and configure it accordingly. The "Blob name" is required and must not be longer than 1024 characters; in my case I'm using "%SourceFileName%".

  • In the "Advanced" tab you can select the "Blob type" (Block blob / Page blob / Append blob) and the "Write mode" (Create new / Overwrite) based on your needs, then click "OK".


  • Create a Receive Port (in my case I picked the easy option, a file port).
  • Set the filter on the Send Port (the one that writes to Azure Blob Storage) to "BTS.ReceivePortName" == "YourRcvPortName".

Once you drop files in the test input location, your send port writes them to the blob container in the configured storage account.

Thank you for Reading!

Hope this gave you an idea of the built-in Azure Blob Storage adapter in BizTalk Server 2020.

Keep learning; it's not always about how quickly we learn, it's about how efficiently we learn.

FB(I) from BB(D)

Using Azure Functions in C# and SendGrid to Create a Serverless solution for Emailing Reports

I found a great resource for building a serverless report server with Azure Functions that used JavaScript, so I thought I would make a version using C#.

The Problem

We have a task-tracking application where we want task owners to be alerted about tasks that are due the next day, and managers to be alerted when a task is overdue. Alerts should be delivered through email every day at 8:00 AM.


SendGrid NuGet package
System.Data.SqlClient NuGet package


To simplify the query that we have to send from the Azure Function, we created two SQL views that already give us the list of tasks due tomorrow and the list of tasks that are overdue.

-- The first view's name was cut off in the original post; "TasksDueTomorrow" is assumed here.
CREATE VIEW TasksDueTomorrow AS
... where (CONVERT(date, dbo.EmployeeTasks.DateToDo) = CONVERT(date, DATEADD(DAY, 1, GETDATE())))

CREATE VIEW OverdueTasks AS
... where (CONVERT(date, dbo.EmployeeTasks.DateToDo) < CONVERT(date, GETDATE()))

The next step was to create the Azure Function. As this is a task we want to execute every day, the type of Azure Function to create is a Timer Trigger.

One thing to take note of: CRON expressions on the TimerTrigger default to UTC, so if we want a localized time we need to add the app setting WEBSITE_TIME_ZONE. The Microsoft Time Zone Index has a list of valid values for this setting.

WEBSITE_TIME_ZONE application setting to specify Time Zone to use
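For local development this can be sketched in local.settings.json as below; on Azure the setting goes under the Function App's application settings. The time-zone value here is just an example from the Time Zone Index:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "WEBSITE_TIME_ZONE": "W. Europe Standard Time"
  }
}
```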

We create 2 helper classes to help us package the data from SQL for sending into SendGrid. These are basically the fields that we want to be able to pass into SendGrid's email template.

Helper classes for database views

Next we connect to the database, fetch the fields from the views we created and add them to a TaskList object. Note that DateToDo is stored in the class as a string, as there aren't many formatting options once the data has been handed over to SendGrid; on creation, we already format it with ToShortDateString().

var tasklist = new TaskList();
var connectionString = Environment.GetEnvironmentVariable("dbConnection");
using (SqlConnection conn = new SqlConnection(connectionString)) {
    // The view name was cut off in the original post; "TasksDueTomorrow" is assumed here.
    var query = @"select TaskName, GenericDescription, OwnerEmail,
                  OwnerName, DateToDo, ManagerName, ManagerEmail, Name
                  from TasksDueTomorrow";
    SqlCommand cmd = new SqlCommand(query, conn);
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader()) {
        if (reader.HasRows) {
            while (reader.Read()) {
                tasklist.Tasks.Add(new EmployeeTask {
                    TaskName = reader.GetString(0),
                    GenericDescription = reader.GetString(1),
                    OwnerEmail = reader.GetString(2),
                    OwnerName = reader.GetString(3),
                    DateToDo = reader.GetDateTime(4).ToShortDateString(),
                    ManagerName = reader.GetString(5),
                    ManagerEmail = reader.GetString(6),
                    Name = reader.GetString(7)
                });
            }
        }
    }
}
Now that we have the data, we can pass it along to SendGrid. Let's update the function signature to include a reference to the SendGrid message collector. The ApiKey attribute is the name of the app setting that contains our actual SendGrid API key.

The first thing we get is the template ID. This will be the ID of the SendGrid template to use for this batch of emails (more on this later).

We then group the tasks by task owners so that they get a consolidated list of tasks that are due tomorrow instead of receiving an email for each one.

For each grouped task, we create a SendGridMessage object with the corresponding template data and add this to the message collector. At the end of the processing, the SendGrid web job takes all the messages and sends them out.

var templateID = Environment.GetEnvironmentVariable("dueTomorrowTemplate");
var byBrickOnboarding = new EmailAddress("", "byBrick Onboarding");
var groupByTaskOwner = tasklist.Tasks.GroupBy(e => e.OwnerEmail);
foreach (var group in groupByTaskOwner) {
    var message = new SendGridMessage() {
        TemplateId = templateID,
        From = byBrickOnboarding
    };
    message.AddTo(group.Key, group.First().OwnerName);
    message.SetTemplateData(new TaskList { Tasks = group.ToList() });

    await messageCollector.AddAsync(message);
}

The last part is creating the SendGrid Template.

The SendGrid Dashboard has a link to create Transactional Templates (1). Once we create our template, we get its ID (2), which we set as the template app setting in our function app.

We can design the template with code where we can provide test data and see in real time what the email will look like.

Test Data in JSON that maps to our data model from the function app.
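The test data is JSON shaped like the TaskList/EmployeeTask helper classes; a minimal sketch is below (all field values invented for illustration):

```json
{
  "Tasks": [
    {
      "TaskName": "Submit timesheet",
      "GenericDescription": "Fill in and submit this week's timesheet",
      "OwnerEmail": "jane@example.com",
      "OwnerName": "Jane",
      "DateToDo": "6/5/2019",
      "ManagerName": "John",
      "ManagerEmail": "john@example.com",
      "Name": "Jane"
    }
  ]
}
```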

Remember to replicate the app settings when deploying your function app so the right API key, DB Connection string and template IDs are available on the deployed function app.

The complete code to the function app is below

public static async Task Run([TimerTrigger("0 30 10 * * *")]TimerInfo myTimer,
                             ILogger log,
                             [SendGrid(ApiKey = "SendGridAPI")] IAsyncCollector<SendGridMessage> messageCollector)
{
    var status = "No Emails Sent.";
    var tasklist = new TaskList();
    var connectionString = Environment.GetEnvironmentVariable("dbConnection");
    using (SqlConnection conn = new SqlConnection(connectionString)) {
        // The view name was cut off in the original post; "TasksDueTomorrow" is assumed here.
        var query = @"select TaskName, GenericDescription, OwnerEmail,
                      OwnerName, DateToDo, ManagerName, ManagerEmail, Name
                      from TasksDueTomorrow";
        SqlCommand cmd = new SqlCommand(query, conn);
        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader()) {
            if (reader.HasRows) {
                while (reader.Read()) {
                    tasklist.Tasks.Add(new EmployeeTask {
                        TaskName = reader.GetString(0),
                        GenericDescription = reader.GetString(1),
                        OwnerEmail = reader.GetString(2),
                        OwnerName = reader.GetString(3),
                        DateToDo = reader.GetDateTime(4).ToShortDateString(),
                        ManagerName = reader.GetString(5),
                        ManagerEmail = reader.GetString(6),
                        Name = reader.GetString(7)
                    });
                }
            }
        }
    }

    var template = Environment.GetEnvironmentVariable("dueTomorrowTemplate");
    var byBrickOnboarding = new EmailAddress("", "byBrick Onboarding");
    var groupByTaskOwner = tasklist.Tasks.GroupBy(e => e.OwnerEmail);
    if (tasklist.Tasks.Count() > 0)
        status = $"{groupByTaskOwner.Count()} emails sent";

    foreach (var group in groupByTaskOwner) {
        var message = new SendGridMessage() {
            Subject = "byOnboarding: Tasks Due Tomorrow",
            TemplateId = template,
            From = byBrickOnboarding
        };
        message.AddTo(group.Key, group.First().OwnerName);
        message.SetTemplateData(new TaskList { Tasks = group.ToList() });

        await messageCollector.AddAsync(message);
    }
    log.LogInformation($"{status} for {DateTime.Now.ToShortDateString()}");
}

Using Azure REST interface with Postman and PowerShell Az module

I recently needed to access Azure through its REST interface and thought it was a good opportunity to acquaint myself with the fairly new PowerShell Az module. The Az module will replace AzureRM, introducing shorter commands, higher stability, and cross-platform support. Now, before rushing off and installing the Az module, uninstall AzureRM first: installing both will render your setup unusable and be a pain to clean up.

My original motivation for this was accessing reports generated by Azure API Management (APIM). This can be achieved by turning on the Management REST API in the APIM instance and generating a SAS token; however, that only gives you access to the older versions of the API. The Microsoft docs refer to the 2019-01-01 version of the REST API, and to access that you need to use OAuth2 instead. In this post, however, I will only go as far as listing a resource group.

So let's get cracking. First we need to log in:

> Connect-AzAccount

This will bring up a dialog to enter your Azure credentials and create a session. If you have multiple subscriptions tied to your account, you will want to set which subscription you are working on through Set-AzContext:

> Get-AzSubscription
 Name                                         Id                                   TenantId                             State
 ----                                         --                                   --------                             -----
 Visual Studio Enterprise – MPN               xxxxxxxx-xxxx-xxxx-xxxx-e95e0cf9d2e2 xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Enabled
 byBrick Development Prod (Pay-As-You-Go)     xxxxxxxx-xxxx-xxxx-xxxx-c8c933d123a8 xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Enabled
 byBrick Development Dev/Test (Pay as you go) xxxxxxxx-xxxx-xxxx-xxxx-4d668f535f61 xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Enabled
> Set-AzContext -SubscriptionId "xxxxxxxx-xxxx-xxxx-xxxx-e95e0cf9d2e2"

In order to get an OAuth2 token to interact with Azure, we are going to create a Service Principal, which is kind of similar to a batch account in Windows. The service principal can be scoped to a specific resource and given an appropriate role. In this example I first create a resource group and then add the service principal to it with the Reader role. It's important to store the result of the New-AzADServicePrincipal call, as it contains the Secret; if this is not stored, the service principal has to be reset to generate a new one.

> $rg = New-AzResourceGroup -Name example-rg -Location northeurope
> $sp = New-AzADServicePrincipal -DisplayName "example-sp" -Role Reader -Scope $rg.ResourceId
> $sp

 Secret                : System.Security.SecureString
 ServicePrincipalNames : {xxxxxxxx-xxxx-xxxx-xxxx-25360bb5073e, http://example-sp}
 ApplicationId         : xxxxxxxx-xxxx-xxxx-xxxx-25360bb5073e
 ObjectType            : ServicePrincipal
 DisplayName           : example-sp
 Id                    : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

The Secret is a System.Security.SecureString and we have to know that secret in order to get our token; there are more or less hacky ways of achieving this, and below is the way I took:

> $BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($sp.Secret)
> [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)
> [System.Runtime.InteropServices.Marshal]::ZeroFreeBSTR($BSTR)

Now we have all the information we need and it is time to break out Postman. The address to request the token from is https://login.microsoftonline.com/<TenantId>/oauth2/token. This is a POST request; the body of the message should be form data and contain grant_type, client_id, client_secret, and optionally a resource.

OAuth2 Token with Postman

The client_id is the ApplicationId of the service principal, and the client_secret is the Secret we extracted from that same service principal.
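Put together, the raw request that Postman sends looks roughly like this (placeholders for the service-principal values; the resource targets the Azure management API):

```http
POST https://login.microsoftonline.com/<TenantId>/oauth2/token HTTP/1.1
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials&client_id=<ApplicationId>&client_secret=<Secret>&resource=https://management.azure.com/
```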

Let's finish up and see if our shiny new token actually works. To list all resource groups for a specific subscription, we call the endpoint https://management.azure.com/subscriptions/<SubscriptionId>/resourceGroups?api-version=2019-05-10. To authorize the call, we set the authorization type to Bearer Token and paste in the access_token we got in the response to the previous call.
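Sketched as a raw request (the api-version is the one used in this post):

```http
GET https://management.azure.com/subscriptions/<SubscriptionId>/resourceGroups?api-version=2019-05-10 HTTP/1.1
Authorization: Bearer <access_token>
```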

List Resource Groups

Since the service principal was only scoped to the resource group we created this will be the only group listed.

The REST interface of Azure is plentiful and can be quite useful for extracting information from Azure components. An example is my original problem of fetching reports from APIM: simply by adding the header Accept: text/csv to the REST request, the response was formatted as CSV instead of JSON, ready to use in Excel.

Below are some useful resources related to this article.

Gilded Rose TDD & Refactoring Kata

As part of my summer learning plan, I also wanted to practice refactoring and test-driven development. I have been doing the Gilded Rose refactoring kata for the past 3 days, and I have to say that it is really a great way to practice. After every iteration, I noticed an improvement in the code that I was producing and also in the way I arrive at the solution.

I put my code up on GitHub, but it was only on the second and third days that I started creating branches for the solutions I got. In hindsight, I should have done this from the start, as it is a good way to look back at the various solutions you come up with.

I recently finished reading Working Effectively with Legacy Code by Michael Feathers, so it was great to be able to put that knowledge into practice. The book makes for a great reference, so I actually got the hardcopy as it is much easier to flip back and forth through than the Kindle version.

The testing framework already being used by the project is xUnit, so it was a chance for me to learn that framework as well. Apart from xUnit, I thought it would be a good opportunity to also start learning how to use Fluent Assertions.

The following is a timelapse of one of the iterations of the kata which I felt was fairly presentable.

My Approach

Before making any changes to the original code, I wanted to make sure that I had some tests to verify that the current code works as described and that I would not be breaking anything.

I did have to modify Program to be public in order to create an instance of it. I also created a constructor that assigns the Items property on initialization since, by the specification, we are not supposed to touch the Item class or Items property.

I also added a GetItem() method to return the items so I can verify that the correct changes were made. Because items is passed by reference, I should just be able to do the assertions on items instead of having to create the local variable changedItems.


Next was a simple conversion of the for loop into a foreach loop to make it easier to extract a method that I can start testing.


Visual studio has some great options for automatic refactoring which you can find when you press Ctrl+. on lines you want to refactor. I basically highlighted everything in the foreach loop and extracted that as a method.

Extract Method

No major changes yet, so the tests should still pass.


I then created parameterized tests just for the method so I can easily add more cases. Looking at the spec and at the current sample items, they didn't really test the extremes, like items that are already at 50 or 0 quality and need to be degraded or upgraded. I added a few more test cases so that we exhaust all possibilities. At this point, I left the conjured items with their old functionality and chose to deal with them once I had a better view of the code.

Testing  Update Item

At this point I wanted to extract UpdateItemQuality into its own class. I did the same with the tests associated with it to keep things a little more organized.

Create Item Processor

Now, UpdateItemQuality is still a huge and complex method which I didn't quite want to attack just yet, so I made some minor improvements like merging nested if statements using Visual Studio's automated refactoring, making sure that I don't oversimplify the statements and end up changing the program ever so slightly.


Because I am being non-confrontational, I added the functionality to get a category for an item based on the item name. Again, because we want this to be TDD, we create the tests first, then develop the methods.


Instead of having the ItemProcessor process all the different items, I wanted to make use of polymorphism so that a more specific type of processor worries about the implementation depending on the category. I then created the tests (and corresponding classes) for the different processors.


Once I isolated the items into their own update methods, it became fairly easy to refactor the UpdateItemQuality method by just removing the irrelevant cases for each item type.


Because I had tests with exhaustive cases, I was confident making all the drastic changes. This also made it easier to add the new functionality for conjured items once everything was refactored. See a timelapse of the full session below:

This post originally appeared on the coding hammock

INTEGRATE 2019 – Day 3 | byBrick Development

Wednesday 5th June 2019

The Final Day at Integrate 2019 !

09:00  Scripting a BizTalk Server Installation – Senior Premier field Engineer – Microsoft Azure

He started by explaining why we must script the installation, illustrating the concept with an example of serving a plate at a restaurant.


  • Streamlined environments
  • Remember all details
  • Repeatable execution
  • Fewer errors than manual installation

He also suggested what you should script:

  • Things that can be controlled
  • Things that would not change
  • Good Candidates
    • Windows feature
    • Provision VM in Azure
    • BizTalk features and group configurations
    • MSDTC settings
    • Host and Host Instances
  • Bad Candidates
    • Things that would change over time

He also suggested what we must look out for before we start –

  • Set a time frame
  • A proper documentation of execution process. Scripting is not a replacement for documentation
  • Document your baseline 
  • Decide and standardize your developer machines, disaster recovery prep and test environments.

Good Practice –

  • Main code should orchestrate process
    • Create functions for task
  • Name scripts to show order to run them
  • Write a module for common functions
    • Logging, Prompts, Checking etc
  • Use a common timestamps for a generated file
  • Be moderate while error handling
    • It's easy to spend a lot of time on unlikely errors
  • Debugging is a good friend

Common Issues –

  • Wrong binaries
  • Permissions to create Cluster resources.
    • No access rights in AD etc

While configuring the groups, replace tokens with values. Later he showed a quick demo to create a host and host instances using PowerShell scripts along with code walkthrough slides.

For more information you can view below Blog post or access the GitHub repository.

09:45 BizTalk Server Fast & Loud Part II : Optimizing BizTalk – Sandro Pereira – Microsoft MVP

This session was called Part II because it was a continuation of a session he delivered a few years back at Integrate. He started off by taking the real-life example of a car and comparing its components with BizTalk artifacts, a few being:

  • Car chassis – BizTalk Server
  • Engine – SQL Server
  • Battery – Memory
  • Tires – Hard Drive

He also gave inputs related to optimizing performance.

  • Choose the right infrastructure for your BizTalk environment
  • How queuing techniques can be used to process large amounts of data
  • He suggested first observing how your environment behaves, analyzing it and applying the necessary fixes, then repeating until the issue is fixed
  • You can also redesign in case your existing BizTalk solution is causing a bottleneck
  • Use minimal tracking to avoid database and disk performance issues
  • He also showed SQL Server memory configuration which helps in optimizing message processing

He also did a walk-through of 2 real-world scenarios and how he managed to improve their performance.

Various Solution to optimize performance –

  • Recycling BizTalk and IIS
  • Tune performance with configurations
  • SQL Affinity – Max no of memories
  • Tweaking MQ series polling intervals
  • Set Orchestration dehydration property

He finally ended the session by sharing his details and blog information

10:25 Changing the game with Serverless solutions – Michael Stephenson, Microsoft MVP – Azure

He started the session with a small introduction about himself and the Integration Playbook, a community that provides an integration-architecture view on the many technologies which make up the Microsoft integration technology stack.

His entire session revolved around the application he built, an online store (Shopify). He showed and discussed the serverless components used in his solution, and explained how he uses Application Insights to gather various stats and improve the customer experience.

Building the Shopify store involves various components: API Management, Power BI, Power Apps, Azure SQL DB, Service Bus, and Azure Functions (Shopping Cart Add, Shopping Cart Update, Order Add, Product Update).

He explained the various building blocks he uses for this application –

  • Business Intelligence Platform (Azure SQL DB, Cognitive Services, Power BI)
  • Integration Platform (Service Bus, Functions, Logic Apps)
  • Communication & Collaboration (Microsoft Teams, Bot Service, Microsoft QnA Maker)
  • Systems of Engagement (Power Apps)
  • Product management & order fulfilment (Oberlo, Manual)
  • Marketing & Social Media (Google AdWords, Facebook)
  • Payment suppliers (PayPal, Stripe)

He later showed a quick demo of how he has implemented webhooks in his solution, and how he uses Logic Apps to store the most popular products searched for or added to the cart. He then uses this stored information to display the most popular product category (over the last 2 weeks) on his site.

He also said that for any customer, shipment data and tracking are key. He has implemented a web application tab which allows the customer to track their order.

He finally ended the session by sharing his thoughts on how can you build a  Serverless solution.

Thoughts –

  • With Azure a small business can build an enterprise capable online store
  • We can implement back office processes to support the business
  • The data platform let us gain the insights we want
  • The cost/capacity/scale can go from small and grow to very large
  • As a consultant I am seeing new business models and engagements with customers

11:35 Adventures of building a multi-tenant PaaS on Microsoft Azure – Tom Kerkhove, Azure Architect at Codit, Microsoft Azure MVP, Creator of Promitor

He started by giving a short introduction about himself. The presentation had around 80 slides, but he managed to explain each and every point nicely. The first topic he covered was scaling up and down versus scaling in and out, along with choosing the right compute infrastructure (as control increases, so does the complexity).

He provided some inputs related to scaling (Serverless, PaaS, CPaaS).

Designing for scale with PaaS

  • Good (you define how it scales, scaling awareness)
  • Bad (you define how it scales, hard to determine perfect scaling rules)
  • Ugly (beware of flapping, beware of infinite scaling loops)

Designing for scale with Serverless

  • Good (the service handles scaling for you)
  • Bad (the service handles scaling for you, but doesn't provide much awareness)
  • Ugly (dangerous, can burn a lot of money)

Designing for scale with CPaaS

  • Good (share resources across different teams; serverless scaling capabilities are available with Virtual Kubelet and Virtual Node)
  • Bad (you are in charge of providing enough resources; scaling can become complex)
  • Ugly (takes a lot of effort to ramp up on how to scale; there is lots to manage)

Later he shared a few inputs related to multi-tenancy and choosing a sharding strategy, along with determining tenants.

With regards to monitoring, he suggested that training developers to use their own tools and test automation should be a shared responsibility (health checks, enriching your telemetry, alert handling, RCA). Related to consuming webhooks: always route the webhooks through an API gateway (this decouples the webhook from your internal architecture).

The Lifecycle of a service (Embrace Change)

Private Preview (rough version of the product) > Public Preview (available to the masses) > Generally Available (covered by SLA, supported version) > The End (deprecated, silent sunsetting, reincarnation in 2.0). He ended the session on a positive note with a quote about embracing change: "Change is coming, so you'd better be prepared".

12:25 Lowering the TCO of your Serverless solution with Serverless360 – Michael Stephenson, Microsoft MVP – Azure

He started the session by providing a reality check of the cloud support process. The main idea was to highlight how the entire support system works, who is responsible for what, and how an unskilled person can mess up your solution. He then explained how Serverless360 can be used to assign a support team based on the entities in a composite application.

He then showed a Service Map / Topology in Serverless360 and how it can be useful; the topology describes how your business application is composed.

He also showed the Atomic Scope architecture and how BAM (Atomic Scope) is now embedded in Serverless360, and shared a quick demo of the BAM functionality using Logic Apps.

Data sources for BAM include Queues, Logic Apps, Functions, API Management and custom code.

Key Feature to Democratize Support

  • I can visualize my Azure Estate and know what goes where
  • I can visualize how the app works
  • I can securely perform management actions
  • I can monitor to see if my service is operating properly
  • I can troubleshoot problems with individual transactions
  • I have least privilege access and auditing of who does what

13:45 Microsoft Integration, the Good the Bad and the Chicken way – Nino Crudele, Microsoft MVP – Azure

This session was full of energy. He started with some introduction about himself, sharing the news of being a Certified Ethical Hacker, then talked about the good old days when he worked with BizTalk and some of his experiences while moving to Azure. He thinks that BizTalk is still the best possible option for complex on-prem and hybrid scenarios.

Rather than the technology, the real challenge is Azure governance. It's everything, and without it you can't even use Azure. Governance is everywhere; it's all around us. Even now.

My rule of life is: you have only three ways to achieve a mission or task. Be brave!

The Good

       The Bad

              The chicken way

He later spoke about Azure Scaffold Earlier and Now ( Resource Tags, Resource Groups, RBAC, Subscriptions, Resource Locks, Azure Automation, Azure Security standards etc ) along with Management group and policies.

Does a God exist in Azure governance? The answer is yes: the Global admin, the one who restricts access or grants it for a certain period of time. Use Privileged Identity Management.

There are lots of fancy tools available out in market which helps you to analyze the company statistics, come what may -but the business loves excel. Taking an example of Finance department, what are they really interested about – Totals Usage Quantity by Regions, Locations, Department.

Use Pricesheet ( from Azure portal to understand the price and negotiate for a discount from Microsoft ( not applicable to everything).

With regards to security, he said it's good to have a dedicated team and resources handling it. Various tools that can help: Burp Suite, Nmap, Snort, Metasploit, Wireshark, Logstalgia. Network management is core, and a good practice is to use a centralized firewall like FortiGate. Logstalgia helps us analyze network traffic and how packets are travelling; its visualization of a DDoS attack is great.

He also gave a quick glance at how Logstalgia (website access log visualization) works and how effective it is.

A good naming standard is a must, and he showed a tool that helps to set one. There are lots of options in Azure, each with its pros and cons. If you are stuck with anything, create a support ticket if your org has an Enterprise Agreement (support is free – technical + advisory).

Consider your Azure solution like your home and don't trust anybody; there is always a possibility that someone could inject scripts (hashing = integrity). Any detected change should raise an alert, and execution must stop.
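The "hashing = integrity" idea can be sketched in a few lines of Python – an illustrative stand-in, not the speaker's actual tooling: record a SHA-256 digest of a script while it is trusted, and refuse to run it if the digest no longer matches.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest of a script or artifact."""
    return hashlib.sha256(data).hexdigest()

def verify_before_run(script: bytes, expected_digest: str) -> bool:
    """Refuse to execute anything whose hash no longer matches the
    digest recorded when the artifact was trusted."""
    return fingerprint(script) == expected_digest

# Record the digest while the script is known-good...
trusted = b"Write-Host 'deploy'"
digest = fingerprint(trusted)

# ...so later tampering is detected before execution.
tampered = b"Write-Host 'deploy'; Invoke-Evil"
```

In practice the recorded digest would live somewhere the attacker cannot also modify, and a mismatch would raise an alert and halt the pipeline.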

Documentation is key. He showed how you can utilize tools like Cloudockit (documentation for cloud architectures). He also showed a tool he built which is freely available (The Azure Multiverse add-in for Office).

15:30 Creating a processing pipeline with Azure Functions and AIS – Wagner Silveira, Microsoft MVP – Azure

The last session of Integrate 2019 – the 3 days passed so quickly. He started the session with a quick introduction, then described a scenario, how the solution looked a year ago, and how they have since updated it. Scenario – EDI data received from an API is sent over to a big data repository for reporting and mining.

He later showed what changed, which included:

  • Azure Functions (EDIFACT support via a .NET package)
  • Azure Storage (claim check pattern together with Service Bus)
  • Application Insights

He later showed a quick demo of it, including how the exception handling is taken care of.
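The claim check pattern mentioned above can be sketched as follows – a hypothetical in-memory stand-in for Blob Storage and Service Bus, just to show the shape of the pattern: the heavy EDI payload goes to the blob store, and only a small reference travels through the queue.

```python
import uuid

blob_store = {}   # stand-in for Azure Blob Storage
queue = []        # stand-in for a Service Bus queue

def send_with_claim_check(payload: bytes) -> str:
    """Store the heavy payload in the blob store and enqueue only a
    small 'claim check' message carrying the blob reference."""
    blob_ref = str(uuid.uuid4())
    blob_store[blob_ref] = payload
    queue.append({"blob_ref": blob_ref, "size": len(payload)})
    return blob_ref

def receive_with_claim_check() -> bytes:
    """Dequeue the claim check and redeem it against the blob store."""
    msg = queue.pop(0)
    return blob_store.pop(msg["blob_ref"])
```

This keeps messages well under Service Bus size limits regardless of how large the EDI payload grows.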

Dead Letter Queue Management

  • Logic Apps polling subscriptions DLQ every 6 hours
  • Each subscription's DLQ could have its own logic
  • Email notifications
  • Error blob storage
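The per-subscription dead-letter handling described above might look like this sketch – the reason strings and routing rules are illustrative assumptions, not the presenter's actual logic: a known failure reason triggers a notification, everything else is archived to error blob storage.

```python
notifications = []   # stand-in for outgoing email alerts
error_blobs = []     # stand-in for error blob storage

def handle_dead_letter(subscription: str, message: dict) -> str:
    """Route a dead-lettered message: a known failure reason triggers
    an email notification, everything else is archived to error blobs."""
    reason = message.get("dead_letter_reason", "")
    if reason == "MaxDeliveryCountExceeded":
        notifications.append((subscription, message["body"]))
        return "notified"
    error_blobs.append((subscription, message))
    return "archived"
```

A Logic App polling the DLQ every 6 hours would simply call a handler like this for each message it drains.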

One year on, if we talk about the present day with new technologies in place, what would the possible candidates be?

  • Integration Service Environment
  • Azure Durable Functions
  • Event Grid

Some of the important features released in the last year include:

  • Azure Functions Premium
  • Integrated support for Key Vault
  • Integrated support for MSI
  • Virtual network support and service endpoints

Finally, he summarized the session with the bullet points below:

  • Look at various technology and options available
  • Watch out for operational cost
  • Road map of the components
  • Big picture and where your solution fits.

I would sum up the highlights by saying there was plenty to learn and gain from Integrate 2019. Happy to have been part of it.

INTEGRATE 2019 – Day 2 | byBrick Development

Tuesday 4th June 2019

Day 1 was packed with lots of information; let's have a look at what day 2 had to offer.

08:30 – 5 tips for production ready Azure Functions – Alex Karcher, Program manager – Microsoft

Day 2 started off with a presentation from Alex Karcher, where he shared five major tips:

  • Serverless APIs & HTTP (premium plan) scale
  • Event stream Processing and scaling
  • Options for Event Hubs scaling
  • Inner and Outer loop development ( Azure DevOps CI/CD)
  • Monitoring & Diagnostics 
    • Application Insights ( easy to integrate with Functions )
    • Distributed tracing
    • Application Map trace and diagram ( view dependencies)

He also spoke about the Consumption plan, and how autoscaling in the Premium plan gives more control than the App Service plan.

09:15 API management deep dive – Part 1 – Miao Jiang, Product Management – Microsoft

He spoke about the automation challenges with API Management, which included the two bullet points below:

  • How to automate deployment of APIs into API Management
  • How to migrate configurations from one environment to another

He suggested an approach for building a CI/CD pipeline using ARM templates. He also showcased deployment of a food truck application to API Management.

Creator tool – built to generate an ARM template that can then be used to deploy to API Management.

Extractor tool – can be used to extract existing configuration from published APIs. He used VS Code and the API Management extension (private preview) to showcase a demo. You can perform all the necessary operations without even moving to the Azure portal.

Takeaways  –

  • Use separate service instance for environments
  • Developer and Consumption tiers are good choices for pre-production
  • Templates based approach is recommended
    • Consistent with the rest of Azure services
    • RBAC
    • Scalable
  • Modularizing templates provides wide degree of flexibility
    • Access control, governance, granular deployments

09:45 Event Grid Update – Bahram Banisadr, Program Manager – Microsoft

He started the session by speaking about why there is a need for Event Grid, and also explained the basics of how it works.

What’s new  –

  • Service Bus as an Event Handler (preview)
  • 1MB Events Support (preview)
  • IoT Hub device telemetry events (preview)
  • GeoDR (GA, Generally available)
  • Advanced Filters (GA)
  • Event Domains (bundling of topics)
    • 100,000 topics per event domain
    • 100 event domains per Azure subscription

He also presented a case study on Azure Service Notifications.

What’s Next ?

  • Remove workarounds
  • Greater transparency (proper diagnostics/debug functionality)

The session ended with a Quick demo on event grid.

10:45 Hacking Logic Apps – Derek Li, Program Manager – Microsoft | Shae Hurst, Engineer, Logic Apps – Microsoft

Derek Li started the session by saying that whatever would be presented in the slides was something the audience had never seen before. The session opened by announcing a new feature in Logic Apps called Inline Code (public preview). Shae Hurst showed a quick demo of how we can use inline code in our Logic Apps and how easy it is to implement. Demo – extracting a list of email addresses using a regex (JavaScript).

Execute JavaScript Code (public preview); more languages are to come in the near future.
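The inline-code demo used a JavaScript regex; the same idea can be sketched in Python (the pattern below is a simplified illustration, not the one from the session):

```python
import re

# Simplified email pattern for illustration; production-grade address
# validation is considerably more involved.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def extract_emails(text: str) -> list:
    """Return the email-like tokens found in the text, in order."""
    return EMAIL_RE.findall(text)
```

In a Logic App, the inline code action would run the equivalent JavaScript against the message body and return the matches to the next action.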

VS Code for Logic Apps  –

  • Create or Add existing logic App
  • Automatic creation of ARM deployment template
  • Azure DevOps integration

Tips of the Trade

  • Sliding window trigger
  • Run against older version

He also suggested when to go for inline code vs. Azure Functions – if your code takes a longer execution time, more than 5 seconds, use Azure Functions.

He also shared a few tips on how to avoid 429s (throttling):

  • Use a singleton Logic App to call a connector, to avoid parallel branches/instances fighting over rate limits
  • Use different connectors per action/Logic App
  • Use multiple connections per action
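A common complement to these tips is retrying with exponential backoff when a 429 does slip through; a minimal sketch (the exception type here is a stand-in for an HTTP 429 response):

```python
import time

def call_with_backoff(operation, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Retry an operation that signals throttling by raising an error
    containing '429', sleeping base_delay * 2**attempt between tries."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except RuntimeError as exc:  # stand-in for an HTTP 429 response
            if "429" not in str(exc) or attempt == max_attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```

Logic Apps has built-in retry policies on actions that do essentially this; the sketch just makes the schedule explicit.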

What's on Derek's Mind?

  • More Inline & VsCode
  • New Designer
  • Better Token picker
  • Emojis generator connector

11:30 API Management: New Developer portal – Mike Budzynski, Product Manager Microsoft

This was the session where he shared the news about the new developer portal, which would be available to all users the coming Wednesday (12th June 2019). He gave a quick intro about the portal, which is used either by API providers (designers, content editors, etc.) or API consumers (app developers).

Some key points to take note of:

  • The portal is built from scratch
  • Technology used – JAMstack (JavaScript, APIs, markup)
  • It has a modern look and feel
  • It's open source and DevOps friendly

He also showed a quick demo of the new developer portal. The look and feel resembled the Integrate 2019 website.

You can get more information related to the developer portal on the link below

12:00 Lunch

13:00 Making Azure Integration Services Real – Matthew Farmer, Senior Program Manager – Microsoft

By 2022, 65% of organizations will have moved to hybrid integrations. There are 4 different integration scenarios:

  • Application to Application
  • Business to Business
  • SaaS
  • IOT

Each of them has different integration challenges, and he quoted a few that we may face working with these integrations (different interfaces, cloud or on-prem, service-oriented, distributed, etc.).

IPaaS – Below are the 4 key integration components for building a solution

  • API’s
  • Workflows
  • Messages
  • Events

He also showed a quick demo related to processing orders, and how they all integrate and work together (basic enterprise integration on Azure).

Free Whitepaper (Azure Integration Services)

A few slides covered the move from BizTalk to Azure Integration Services:

  • There isn't a 1:1 mapping
  • Adopting cloud paradigm requires a different approach
  • Many new concepts to take advantage of
    • Connectors
    • API Economy
    • Serverless
    • Reactive code
  • Many Assets can be transformed from BizTalk to Logic Apps easily 
    • Schemas
    • Maps
    • EDI Agreements

All of the above can easily be moved to a Logic Apps integration account.

  • Orchestration and Pipelines can be remodeled in Logic Apps

Tricky things that are hard to move:

  • BizTalk implementations with huge code bases
  • Lots of rules engine (sometimes)

Buy in to the vision of Azure Integration Services

  • API Economy
  • Logic Apps as application ‘glue’
  • Serverless – or dedicated look at integration services
  • Pay as you use

Identify use case, Design a target architecture and Create a migration plan.

Making it Real

  • Understand the principles
  • Strategy over tactics, value over cost
  • Build a migration plan
  • Don’t under govern
  • Don’t over govern
  • Think about culture change

13:40 Azure Logic Apps vs Microsoft Flow, why not both? – Kent Weare, Microsoft MVP – Business Applications

The session started off with an explanation of Microsoft Flow's features.

  • ISaaS (Integration Software as a Service)
  • Azure Subscription not required
  • License entitlement available through Dynamics 365 and Office 365
    • Additional standalone license available
  • Part of power platform (Power Apps, Power BI, Microsoft Flow)
    • Deep Integration with PowerApps
  • Over 275+ available connectors.
  • Custom Connectors, Standard and Premium (P1 required)
  • Cloud and On-Premise
  • Approvals
    • Authenticated
    • Tracked in Common data service
    • Custom Approval options
    • Can respond from
      • Flow Approval center
      • Email
      • Flow Mobile application
      • Microsoft Teams
    • Graduate Flows to Logic Apps (a few conditions apply)

Later he jumped to describing some features of Logic Apps:

  • IPaaS (Integration Platform as a Service)
  • Azure Subscription required
  • Consumption based billing
  • Part of Azure Integration Services platform (API Management, Service Bus, Event Grid)
  • Over 275 + available connectors
  • Around 95% symmetric between Flows and Logic Apps
  • Enterprise connectors ( SAP, IBM MQ, AS2, EDI EDIFACT)
  • 3rd party custom connectors + custom connectors
  • ISE – Vnet support
  • Cloud and On-premise
  • Editing Experience
    • Web Browser
    • Visual Studio 2017/2019
      • Azure Logic App tools for Visual Studio 2017/2019
      • Enterprise Integration Pack
    • Visual Studio Code
    • Continuous Integration / Continuous Deployment
      • Azure DevOps
    • Integration Pack
      • Integration Accounts
      • Typed Schemas
      • Flat File encoding/decoding
      • XML Transformation
        • BizTalk tooling
      • JSON Transformation
        • Liquid
      • Third Party management
        • Partners and Agreements
    • ISE (Integration Service Environment)
      • VNet connectivity
      • Private static outbound IPs
      • Custom inbound domain names
      • Dedicated compute & Isolated storage
      • Scale In/Out capabilities
      • Flat Cost – whether you use it or not
      • Extended limits
    • Monitoring
      • Webhook integration with Logic Apps for event orchestration
      • 3rd Party support for Serverless360

As we can see, there are a lot of shared capabilities between Microsoft Flow and Azure Logic Apps. There are also some subtle differences between the two, but they can play a significant role in determining which is the best tool for the job. Ultimately, tooling should be selected based upon organization design and the complexity of the requirements.

The Winner is …

  • The organization which leverages both tools to address its needs
  • Innovation doesn’t happen while waiting in line
  • Implement governance and education that allows your business to scale

He later shared a link to his blog post  along with links to Middleware Friday and Serverless Notes

14:20 Your Azure Serverless Applications Management and Monitoring simplified using Serverless360 – Saravana Kumar, Founder – Serverless360 / Microsoft MVP – Azure

He started off by sharing an Agenda for his presentation

  • Management of your serverless Apps
  • Improving DevOps for your serverless Apps
  • End-to-End tracking for your serverless Apps
  • Customer Scenarios

What is a serverless app? Similar to LEGO blocks.

Azure Logic Apps, Azure Functions, Azure APIM, Azure Service Bus, Azure Relays, Azure Event Grid, Azure Event Hub, Azure Storage – Queue, table, files, Azure Web Apps, Azure SQL Database

He later showed a few examples of serverless apps, along with the problems we face while managing these apps:

  • No Visibility and hard to manage
  • Complex to diagnose and troubleshoot
  • Hard to secure and monitor 

We see a similar problem in modern applications, which he explained by walking us through the lifecycle of a BizTalk application. With great power comes great responsibility – what are the best solutions to manage your serverless apps?

  • Composite Applications and Hierarchical Grouping
  • Service Map to understand the architecture
  • Security and Monitoring under the context of Application

He later showed and navigated through the Serverless360 product.

DevOps Improvements

  • Templated entity creation
  • Auto process left over messages
  • Auto process dead letter messages
  • Remove storage blobs on condition
  • Replicate QA to Production
  • Detect and Auto correct entity states

BAM (end-to-end tracking) was also demonstrated during the session. He also showed how we can use this functionality and the Serverless360 connector in our Logic Apps for tracking purposes.

The session ended with 3 different customer scenarios from companies currently using Serverless360. You can book a demo to learn more about the product on –

15:30 Monitoring Cloud and Hybrid Integration Solution Challenges – Steef-Jan Wiggers, Microsoft MVP – Azure

He started the session by speaking about cloud-native integration solutions – AIS. He also spoke about the challenges we face while creating either a hybrid solution or a cloud solution, and showcased a real-time scenario related to hybrid integration.

He also spoke about different types of Monitoring and challenges faced.

  • Health Monitoring
  • Availability Monitoring
  • Performance Monitoring
  • Security Monitoring
  • SLA Monitoring
  • Auditing
  • Usage Monitoring
  • Application Logs
  • Business Monitoring
  • Reporting

He showed how users can use different Azure artifacts to perform monitoring activities (Azure Monitor, Log Analytics, Application Insights, Power BI, configured alerts).

How can people build effective hybrid integration scenarios? Training, hands-on labs, learning through mistakes, knowledge bases, forums, guidance, mentoring, exams (AZ-103, AZ-900). Once the solution is in place, you must have a strong support model and well-defined processes.



There are different products available in the market; based on your requirements, select the right tool that suits your needs. A few tools are:

  • Serverless360
  • BizTalk360
  • Atomic Scope
  • Invictus Framework (BizTalk, AZURE)
  • AIMS
  • NewRelic
  • DynaTrace and the list goes on…

There are also different ways of monitoring your services in Azure. He also walked through the cloud-native solution – order processing revisited – and gave input on which products or services to use.

Finally, he ended the session by sharing a few resources for learning:

  • Azure Administrator AZ-103
  • Azure Cost Management
  • Microsoft Azure Monitoring
  • Codit Invictus for Azure and BizTalk
  • Codit D365 Whitepaper
  • Serverless360 Blog
  • ServerlessNotes
  • MiddlewareFriday

16:10 Modernizing Integrations – Richard Seroter, Microsoft MVP Azure

He started the session by going back to year 2009 where he wrote a book called – SOA Patterns with BizTalk Server 2009.

Modernization is a spectrum

Various tools along this spectrum include BizTalk, SSIS, and Azure Service Bus. Later he spoke about a few integration concepts, giving his take on each in four parts:

  1. My advice in 2009
  2. My advice in 2019
  3. Benefits of 2019 advice
  4. Risks with 2019 advice 

Content based routing

  1. BizTalk Server with send port subscriptions
  2. Use BizTalk Server on-premise and Service Bus and Logic Apps for cloud based routing
  3. Your messaging engine is scalable and flexible
  4. Explicit property promotion needed for Service Bus or you need Logic Apps to parse the messages. Cloud based rules are not centralized
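The 2019 advice – routing on promoted properties via Service Bus subscriptions – boils down to filter predicates over message properties, much like BizTalk send-port subscriptions. A minimal sketch (subscription names and filters are illustrative):

```python
def route(message: dict, subscriptions: dict) -> list:
    """Content-based routing: deliver the message to every subscription
    whose filter predicate matches its promoted properties."""
    props = message.get("properties", {})
    return [name for name, matches in subscriptions.items() if matches(props)]

# Illustrative subscriptions, analogous to BizTalk send-port filters
# or Service Bus SQL filters on application properties.
subscriptions = {
    "eu-orders": lambda p: p.get("region") == "EU",
    "big-orders": lambda p: p.get("amount", 0) > 1000,
}
```

This also shows the risk noted above: the routing only works if the properties are explicitly promoted onto the message, since the engine never parses the body.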

Later he shared links to some blogs 

De-Batching from a database

  1. Configure in the BizTalk SQL Adapter and de-batch payload in receive pipeline
  2. For bulk data, de-batch in a Logic App. Switch to real-time, event-driven change feeds where possible
  3. With change feeds, process data faster, with less engine based magic
  4. De-batching requires orchestration (LogicApps) versus pipeline-based de-batching. Can be a more manual setup.
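De-batching in a Logic App ultimately means splitting one bulk payload into per-record messages that keep a correlation back to the batch; a minimal sketch (the payload shape is an assumption for illustration):

```python
import json

def debatch(bulk_payload: str) -> list:
    """Split a bulk JSON payload into individual messages, each keeping
    a correlation id back to the originating batch."""
    batch = json.loads(bulk_payload)
    return [{"batch_id": batch["batch_id"], "record": record}
            for record in batch["records"]]
```

Each resulting message can then be processed (or retried) independently, which is the point of de-batching whether it happens in a pipeline or a workflow.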

This was the moment when my blog was shared, and it made my day!

Stateful Workflow with correlation

  1. Use Orchestration and take advantage of dehydration, correlation and transaction with compensation
  2. Use Durable Functions for long-running sequences along with Logic Apps and Service Bus; break giant orchestrations apart into choreographed sequences
  3. Easier for any developer to build workflows
  4. You may come across limits in how long a workflow can “wait”, and there is less centralized coordination and observability

Complex data transformation

  1. Use the BizTalk mapper to transform data structures, and take advantage of functoids and inline code
  2. Map data on the way out, and use Liquid templates for transformation but not business logic. Also consider transforming in code (Functions)
  3. Avoid embedding too much brittle logic within a map, and leave it up to receivers to handle data structure changes
  4. Not suitable for flat files or extremely difficult transformations. Puts new responsibilities on client consumers
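Keeping business logic out of the map reduces the transformation to a pure field mapping; a sketch using Python's `string.Template` as a stand-in for a Liquid template (field names are illustrative):

```python
from string import Template

# A Liquid-style mapping template: pure field mapping, no business rules.
ORDER_TEMPLATE = Template('{"customer": "$name", "total": $amount}')

def transform(source: dict) -> str:
    """Map source fields onto the target shape; validation and business
    decisions are left to the receiving system."""
    return ORDER_TEMPLATE.substitute(
        name=source["customerName"],
        amount=source["orderTotal"],
    )
```

Anything beyond renaming and reshaping – conditionals, lookups, rounding rules – is exactly the brittle logic the advice says to keep out of the map.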

Integration with cloud endpoints

  1. Call cloud endpoints using the HTTP adapter and custom pipeline components for credentials or special formatting
  2. Use Logic Apps and connectors for integration with public cloud services. Use Logic Apps adapter for BizTalk where needed
  3. Any developer can integrate with cloud endpoints and you have more maintainable integrations
  4. More components from more platforms participating in an integration

Strangling your legacy ESB

  1. Put new integrations into the new system, and rebuild existing ones over time
  2. Similar to 2009, but avoid modernizing to a single environment or instance. Use event storming to find seams to carve out
  3. Get into managed systems that offload operational cost and are inviting to more developers
  4. You will have a lengthy period of dual costs and skillsets

Getting Integrations into Production

  1. Package up BizTalk assemblies, libraries, scripts and policies into MSI and deploy carefully.
  2. Put On-premise and Cloud Apps onto continuous integration and delivery pipelines. Aim for Zero downtime deploys
  3. Reduce downtime, improve delivery velocity and reliability. Introduce automation that replaces human intervention
  4. Complicated to set up with multi-component integrations. Risk of data loss or ordering anomalies during upgrade rollouts.

Building an Integration Team

  1. Invest in training and building a center of excellence
  2. Integration experts should coach and mentor developers who use a variety of platforms to connect systems together
  3. Fewer bottlenecks waiting for Integration team to engage and more types of simple integration get deployed
  4. More distributed ownership and less visibility into all integrations within the company

16:50 Cloud Architecture Recipes for the Enterprise – Eldert Grootenboer, Microsoft MVP Azure 

The final session for the day. Eldert started by explaining how on-prem infrastructure has to be managed by the enterprise, whereas with cloud IaaS you don't have to worry about the infrastructure (patches, OS updates, etc.) – it is all taken care of by Microsoft once your environment is in place. He also discussed how serverless can be useful when your primary focus is on what the business needs and nothing else.

  • What is the right size of server for my business needs ?
  • How can I increase server utilization ?
  • How many servers do I need ?
  • How can I scale my app ?

He also suggested that before we implement any solution, we must draw a line and set guidelines:

  • What should the desired architecture be?
  • Have proper involvement from all the teams (business units, architects, and everyone else involved) so that all are on the same page.
  • Look at the various options available and utilize them rather than building your own. In case you do go down that path, try to use a PaaS offering.
  • Other key things to look out for included:
    • Event Driven approach
    • Scalability
    • Loosely coupled solution
    • Integration patterns
    • DevOps strategy
    • Middleware
  • Look for something that suits your needs, and don't buy something just because it is hyped or attractive
  • Your environment must be secure and monitored.

Explore Azure components while deciding on the architecture (serverless – Logic Apps, Event Grid, Functions), containers, and App Insights for monitoring purposes. There are endless possibilities to choose from. He also shared a few of his customer experiences.

Cloud native starts from PaaS, and DevOps is also something to think of. A few takeaways from the session:

  • Understand and make sure all the scenarios are captured
  • Have a good governance and security model
  • Look out for endless possibilities available out there
  • After understanding your needs, consider a cost-effective approach.

You can check more insights on Day 3 here.

INTEGRATE 2019 – Day 1 | byBrick Development

I would like to pen down my experience, updates, and takeaways for this year's Integrate 2019 (London). Being a first-timer, there was so much to gain from Integrate 2019. I will divide this article across the 3 days.

The event was organized at etc. Venues, London, 3rd – 5th June 2019. As soon as you entered the venue there was a big board which said – WELCOME INTEGRATE 2019. Registration started at 07:30 and was pretty well organized. You were handed your Integrate attendee badge along with a bag which had a book, a pen, and a few pamphlets, plus the agenda for the next 3 days. Breakfast and lunch were provided each day during Integrate 2019.

Registration at Integrate 2019

There were also Booths from the sponsors & organizers – ( BizTalk360, Serverless360, Quibiq, Codit, Hubfly ).

Sponsors at Integrate 2019

The event was almost full, with around 500 attendees. As for the speakers, each one had a different way of presenting, and I thoroughly enjoyed each and every one of them. Though I will not lie – after the lunch break it at times became a bit hard to concentrate, but I somehow managed to get over it.

Speakers at Integrate 2019

Monday 3rd June 2019

08:45 – Integrate 2019 – Welcome Speech, Saravana Kumar, Founder/CEO BizTalk360, Serverless360, Atomic Scope.

Being the 8th anniversary of Integrate, a welcome speech was presented by Saravana, where he welcomed all the attendees, speakers, and sponsors. He introduced their new identity named “” – with quick info about their products (BizTalk360, Serverless360, Atomic Scope). He also gave input related to Integration Monday and Middleware Friday, which are run by the Microsoft community, plus a few links where you can explore and learn more about Azure. You can also use the hashtag #integrate2019 to see input from the various attendees, speakers, and organizers.

09:00 – Keynote, Jon Fancey, Group Principal PM Manager – Microsoft.

A speech titled “Beyond Integration” was presented by Jon Fancey. He spoke about their 2015 vision and how they wanted to be the ruler in the iPaaS space. The iPaaS platform today has more than 300 connectors (Flow, Logic Apps). It was in 2017 that Microsoft's iPaaS offering was first listed by Gartner, and it continues this year as well; if I heard it correctly, it was quoted as a leader in 2018 by Gartner. He quoted Gartner – “The more you innovate, the more you need to integrate”. Later he invited 3 customers who use the Microsoft platform for their integration transformation.

09:10 – Ramak Robinson, Area Architect Integration at H&M

She spoke about H&M's vision – “Most Loved Design in the World”. She also spoke about its technology ambition, the Integration Competence Center at H&M, and their cloud transition journey. She also presented a case study which they are building together with Microsoft – Digital Receipts.

09:20 – Daniel Ferreira, Sr. Cyber-Security Data Scientist, Shell

He spoke about the operational challenges and how they are using API integration to enable their cyber-security operations. He also provided a quick demo of chatbot functionality.

 09:40 – Vibhor Mathur, Lead Architect, Anglo-Gulf Trade Bank

He spoke about how they were on a mission to build the first digital trade finance bank in only 6 months, and about the various Microsoft offerings they used – Logic Apps, Service Bus, API Management, Azure AD – to achieve their target. They wanted to build an architecture that was lean, secure, easy to upgrade, and highly available. Were they able to set up a digital bank in a span of 6 months? The answer is YES, though it took 18 days more than expected (6 months, 18 days) – impressive indeed.

All the above 3 industries (energy / retail / finance) use Microsoft as their cloud platform.

10:00 – Logic Apps Update – Kevin Lam, Principal Program Manager

He shared some interesting updates related to Logic Apps. In the last 6 months, 38 new connectors have been added to Logic Apps (Microsoft + 3rd party). Logic Apps is growing, and it also allows you to create your own custom connectors and publish them to the marketplace. He gave an overview of how to integrate using Logic Apps, which included orchestration, message handling, monitoring, security, etc.

You can use Visual Studio 2019 to deploy your Logic App directly to the Azure portal. You can also use Visual Studio Code to deploy your Logic App via ARM templates. He later spoke about the ISE (Integration Service Environment) architecture, the ISE deployment model, and the ISE roadmap.

11:15 – API Management Update – Vlad Vinogradsky – Product Leader, Microsoft

He started off with what we can use and what are they working on in the field of API management. He gave an overview of API management and spoke about recent developments related to API management along with Demo on Hosting kubernetes on your machine.

  • Manage Identities – Authenticate/Authorize your service. Enable it so that you can authenticate it with your backend.
  • Policies – support for encrypted documents, with simple and advanced validations.
  • Protocol settings. Enable TLS
  • Bring your own cache (Redis compatible)
  • Subscriptions – Enable tracing on Keys
  • Observability –  Set same configuration settings for both Azure Monitor and App Insights, preserve resources by turning on sampling, enable/disable settings for whole API tenant, you can also specify additional headers to log etc.
  • Function + API management – Import function and push function as API
  • Consumption tier – GA last week ( billed per execution and can scale down to zero when there is no traffic ).
  • DevOps resource tool kit – You can have a view at it on GitHub

Future –

  • He told us a bit about the new developer portal; more about it was revealed on 4th June 2019.
  • API Management is currently a cloud-only service, while most customers use both cloud and on-prem (hybrid) environments. Microsoft will soon be launching a self-hosted API Management gateway, allowing you to deploy the gateway component on-prem (launching late summer or early fall).

A link was also shared for all the API lovers – Azure API Management resources –

12:00 – The Value of Hybrid Integration – Paul Larsen, Principal Program Manager, Microsoft

This was the session most of the BizTalk developers were eagerly waiting for. At the end of this year, 2019, the new BizTalk Server 2020 will be launched. You can refer to the following to learn more about its latest features.

He also shared information related to API Management, Logic Apps, Service Bus, and Event Grid. Later he announced a new connector for Logic Apps – IBM 3270 (preview). A hybrid demo – integrating an IBM mainframe program with the Azure cloud – was showcased using the Logic Apps 3270 connector, followed by the future roadmap.

12:30 – Lunch Break

13:30 – Event Hubs update Dan Rosanova, Group Principal Program Manager, Microsoft

He started with introduction on Event Hubs (PaaS) offering along with messaging pattern – Queue. He later explained different protocols like Kafka, Http and AMQP and how Event Hubs and Kafka  differ from queues. Explained Kafka / Event Hubs conceptual architecture along with What Microsoft has to offer in Event Hubs for Apache Kafka.

Four offering of Kafka in Azure

  • Clustered offering
  • PaaS service
  • Marketplace offering
  • DIY with IaaS

Later he showed stats from before and after the load-balancing algorithm improvements.

Finally the session ended with a summary –

  • There is no scale you need that we cannot do
  • The most available messaging platform in any cloud
  • Extremely affordable

14:00 – Service Bus Update – Ashish Chhabria, Program manager, Microsoft

He spoke about the High Availability + Disaster recovery (50 regions) and Geo-Disaster Recovery available for Premium namespaces.

3 weeks back a feature called In-place upgrade was introduced where your standard namespace can be migrated to premium namespace. He also provided a quick demo on how to migrate standard namespace to premium namespace.

Enterprise features announced included,

  • VNET Service Endpoints and Firewalls (where users can limit access to your namespace from specific VNET or IP).
  • Managed Service Identity & RBAC (preview)

For the .NET and Java SDKs, they added management, WebSocket, and transaction support.

A Python SDK will be available soon.

Also inputs related to –

  • Logic App connector for Service Bus
  • Service Bus Queue as event handler (preview)
  • Data Explorer for Service Bus will be out soon.

14:30 – How Microsoft IT does Integration – Mike Bizub, Microsoft CSE&O

He started the session with a recap of the B2B approach and how customers dealing with larger volumes of data are moving from BizTalk Server to PaaS offerings. Logic Apps to support X12 and EDIFACT were created.


Two paths were discussed:

  • Hot-Path
  • Warm-Path

He also spoke about how CI/CD pipelines, deployments and the code repository are managed using ALM and DevOps, covering unit and functional tests along with policies for code review and security compliance.

Security and governance are important, and it is critical how your metadata (SAS tokens, managed identities, secrets, etc.) is managed.

14:50 – AIS Migration Story – Vignesh Sukumar, SE Core Engineering Service team, Microsoft

He started with a quick 20-second wake-up activity, then spoke about the metadata-driven architecture and Migration Accelerators (TPM tool) for migrating BizTalk to a PaaS architecture. These accelerators can migrate artifacts like schemas, trading partners and orchestrations in a click, reducing migration time from days to a few hours (approx. 3 hours per transaction).

He also covered EAI / disaster recovery and its importance for high availability.

You can access Tools and scripts from the below URL.

15:30 – Enterprise Integration using Logic Apps – Divya Swarnkar, Senior Program Manager, Microsoft

She started her presentation with a scenario describing the current state of the Contoso grocery store: how can Azure help track down wasted units when a storage unit malfunctions?

IoT sensors were installed on the storage units, and staff were equipped to receive notifications.

Logic App IoT trigger > Message transform > Notify store team > Create maintenance order in SAP

SAP send equipment change request > Message transform > Create work order in D365 > Notify maintenance team

New Improvements 

  • Integration Account (Standard)
    • Limits for EDI artifacts raised to 1000
  • AS2 V2
    • Is core action – more performant, no limit on timeout
  • Monitoring
    • Batch trigger/action – monitor batch activity, release criteria, correlate items in a batch, and trace the source run of a resubmitted run.

Announcing Today

  • RosettaNet connector for Logic Apps (public preview). She also showed a quick demo of it.

Coming really Soon

  • Data Gateway across subscriptions

Future Roadmap

Extended support for Health Care, additional connectors, integration accounts etc.

  • Industry Verticals
    • Business Verticals – healthcare (HL7)
    • Connectors – Oracle EBS, Netsuite, ODBC
    • Connector marketplace
  • Configuration store
  • Integration account – dev SKU, DR
  • Monitoring
    • Azure AppInsights support for Logic Apps.

16:00 – Serverless Stories and Real Use Cases – Thiago Almeida, Microsoft

The main focus of his session was serverless integration.

  • What is meant by Serverless Integration
  • The file batch challenge
  • Python function use cases
  • Storage stats tracking
  • Durable functions
  • IOT
  • Azure Integration Services

He started with what is meant by serverless integration and the proposed solutions for the file batch challenge: appending files from the same batch into a single JSON file. He also showed a few customer use cases – storage stats tracking, an escalation workflow, and a durable function workflow at Fujifilm.

A purchase order scenario can be solved with a combination of Logic Apps and Service Bus. PaaS + SaaS can be utilized for:

  • On-prem connectivity
  • Workflow automation
  • API Management etc.

16:45 – Microsoft Flow – Sarah Critchley / Kent Weare, Microsoft MVP – Business Applications

The main focus of the session was Microsoft Flow and how organizations can automate their workflows without writing a single line of code. Microsoft Flow and PowerApps bring new extensibility scenarios.

They also highlighted some key capabilities of Microsoft Flow –

  • Built in Approval Centre
  • Data Loss prevention policies
  • Business process flow
  • Geo triggering
  • Review and start flow from mobile
  • Package Flow apps, entities & dashboards to move between environments.

Microsoft Flow on mobile and devices lets users perform activities from anywhere: create new flows, get push notifications, discover buttons, use button widgets, monitor flow activities and grant approvals.

Sarah spoke about Dynamics 365 and how it's used in departments like Sales, Marketing & Finance. She also spoke about PowerApps, a PaaS offering from Microsoft. The session finally wrapped up with a slide focusing on key takeaways.

17:30 – The final session was cancelled, and BizTalk360 gave awards to their valued customers. The day finally ended with networking, a reception and some beers.

You can check more insights on Day 2 here.

byBrick Development IoT Conference 2018 on Tynningsö

Update: for some reason this post was un-published, so re-posting it again..

Wow I say!! Just wow! It was one helluva awesome weekend!

On Friday the 19th of October we all filed away to Tynningsö, located in the beautiful archipelago of Stockholm (Waxholm), for a weekend of IoT, socialising and insights.

We had rented a very nice house on the island and 10 of us headed off.


The cabin we hired in Tynningsö

The agenda was:

  • Friday night
    • Prep. approx. 10 Raspberry Pi 3 B+
    • Eat and be merry
    • Future Planning and Presentation(s)
    • Group work – more prep of the Pi
  • Saturday
    • 4hrs “intro” to Azure IoT from a trainer
    • Group work – think value, IoT
    • Dinner
    • Group work – spent time well past midnight
  • Sunday
    • Group work – presentation, final touches
    • Solution presentation
    • Home

What was so cool about this weekend was the intense engagement across the board. We split up into two teams and gave everybody free rein with the sensors, touch screens et al. that were brought along with us. The aim was to learn a bit more about IoT, get some hands-on experience with the newest cloud trends and focus on practical applications.

We had a trainer from 1337, Mats Tornberg, who was incredibly enthusiastic, gave us the intro to Azure IoT services and set us off on our own path.

Azure Event Grid

Azure Event Grid is a platform-as-a-service (PaaS) offering: an event routing service that lets you subscribe to events.

There are different modes of communication in which a message can be transmitted from one party to another:

  • One-way
  • Bi-Directional
  • Push and Pull mechanism etc

Azure offers various messaging services, such as –

  • Notification Hubs – where you can Push mobile notifications
  • Logic Apps – which helps you to Schedule, build and automate processes (Workflows)
  • Azure Service Bus – Exchanging information between 2 parties.
    • Topics, Queues and Relays
  • Azure Event Hub – Also part of service bus namespace. One-way event processing system that can take millions of events per second and can be used for analysis and storage
  • IoT Hub
  • Azure Event Grid – Based on publish\subscribe model. Easy to relate if you have been working with Microsoft BizTalk Server.

Now you may wonder which one to use: Azure Event Grid, Azure Event Hubs or Azure Service Bus? In cases where you need to process orders or perform financial transactions, you should go with Azure Service Bus. On the other hand, when you need to stream large volumes of data like telemetry or event logs – messaging at scale – you should consider Azure Event Hubs. Azure Event Grid is the choice when you want to react to an event.

Azure Event Grid is based on the publish/subscribe model, where there can be a single publisher but multiple subscribers subscribing to those events.


Event Grid Flow

An event occurs within a publisher, which pushes it to an Event Grid topic. Subscribers listen to that topic, and event handlers are responsible for handling those events.

In the example below we have created an event subscription that subscribes to an Event Grid topic (the publisher).

The topic type can be any of the following –

  • Event Hubs Namespace
  • Storage Accounts
  • Azure Subscriptions
  • Resource Groups
  • Event Grid Topics (Used for Demo)


We used Postman to make a POST request to the Azure Event Grid topic endpoint. An event subscription subscribes to this topic, and the messages are delivered to the subscriber endpoint – RequestBin in this case.
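If you prefer scripting over Postman, the same POST can be sketched in Python. This is a minimal sketch: the topic endpoint and access key below are placeholders you must replace with your own, and the `make_event` helper is illustrative. The payload follows the Event Grid event schema (`id`, `eventType`, `subject`, `eventTime`, `data`, `dataVersion`), and the request is authenticated with the `aeg-sas-key` header:

```python
import json
import uuid
from datetime import datetime, timezone

# Placeholders - substitute your own topic endpoint and access key.
TOPIC_ENDPOINT = "https://<your-topic>.<region>-1.eventgrid.azure.net/api/events"
TOPIC_KEY = "<your-topic-access-key>"

def make_event(subject, event_type, data):
    """Build one event in the Event Grid event schema."""
    return {
        "id": str(uuid.uuid4()),
        "subject": subject,
        "eventType": event_type,
        "eventTime": datetime.now(timezone.utc).isoformat(),
        "data": data,
        "dataVersion": "1.0",
    }

def post_events(events):
    """POST a batch of events to the topic (needs the 'requests' package)."""
    import requests
    resp = requests.post(
        TOPIC_ENDPOINT,
        headers={"aeg-sas-key": TOPIC_KEY, "Content-Type": "application/json"},
        data=json.dumps(events),
    )
    resp.raise_for_status()

if __name__ == "__main__":
    # Inspect the JSON Postman would send; note Event Grid expects an array.
    event = make_event("orders/new", "Contoso.Orders.OrderCreated", {"orderId": 42})
    print(json.dumps([event], indent=2))
```

Sending the resulting array to the topic endpoint triggers the event subscription exactly as the Postman request does.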





Azure Event Grid ensures reliability and performance for your apps. You can manage all your events in one place, and you only pay per event.



Benchmarking Applications with BenchmarkDotNet – Introduction


BenchmarkDotNet is a library that enables developers to define performance tests for applications. It abstracts away the complexity and allows a degree of extensibility and customisation through its API. Developers can get started pretty quickly and refer to its documentation for advanced features.

This post does not actually perform benchmarking, but rather introduces BenchmarkDotNet to developers. The sources used are available on GitHub.



Typically, when we develop a piece of software, some degree of testing and measuring is warranted; how much depends on the complexity of what we are developing, the desired coverage, etc. Unit and integration tests are part of most projects by default and on the mind of any software developer; however, when it comes to benchmarking, things tend to be a little different.

Now, as a very personal opinion, benchmarking and testing are not the same. The goal of testing is functionality – comparing expected vs actual results, a particular focus of unit tests – while integration tests have different characteristics and goals.

Benchmarking, on the other hand, is about measuring execution. We will likely establish a baseline that we can compare against, at least once; we are interested in things like execution time and memory allocation, among other performance counters. Accomplishing this can be challenging – and doing it right even more so. This is where BenchmarkDotNet comes into play: like libraries for other aspects of software development, it abstracts the work of defining, maintaining and leveraging performance tests away from the developer.

Some facts about BenchmarkDotNet

  • Part of the .NET Foundation.
  • Runtimes supported: .NET Framework (4.6.x+), .NET Core (1.1+), Mono.
  • OS supported: Windows, macOS, Linux


Benchmarking our Code

While it is unlikely that every piece of code in a system will need to be benchmarked, there may be scenarios where we can identify modules that are part of a critical path in our system and, given their nature, are subject to benchmarking. What to measure and how intensively varies on a case-by-case basis.

Whatever our expectations, we need to ensure that critical modules in a system perform optimally, or at least acceptably, and establish a baseline we can compare against as we make gradual, incremental improvements. We need to define measurable, quantifiable performance indicators.

Usage of a Library

The reasons are many and apply to practically any library: favour reusability, avoid reinventing the wheel, but most importantly rely on something that is heavily tested and proven to work. In this particular case we care about how accurate the results are, and that depends on the approach. We have all seen examples out there where Stopwatch is used, and while that is not entirely bad, it is unlikely to ever provide the accuracy, flexibility or extensibility that BenchmarkDotNet does. To mention some of the features BenchmarkDotNet provides:

  • Allows the developer to target multiple runtimes through Jobs, for instance various versions of the .NET Framework and .NET Core; this is instrumental in preventing extrapolated results.
  • Generation of reports in various formats that can be analysed.
  • Provides execution isolation to ensure running conditions are optimal.
  • Takes care of aspects like performing several iterations of execution and warm-up.

More information about the actual flow can be found in the How it works section of the documentation.
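BenchmarkDotNet itself is a .NET library, but the measurement discipline it automates – warm-up runs, many timed iterations, summary statistics – can be made concrete with a deliberately naive sketch in Python. The `benchmark` helper below is purely illustrative and not part of BenchmarkDotNet's API:

```python
import statistics
import time

def benchmark(fn, warmup=5, iterations=50):
    """Naive measurement loop: warm up first, then time many iterations.
    BenchmarkDotNet automates this for .NET code, adding process
    isolation, JIT considerations and proper statistical analysis."""
    for _ in range(warmup):          # warm-up: let caches etc. settle
        fn()
    samples = []
    for _ in range(iterations):      # timed iterations
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return {
        "mean_s": statistics.mean(samples),
        "stdev_s": statistics.stdev(samples),
        "min_s": min(samples),
    }

if __name__ == "__main__":
    print(benchmark(lambda: sum(range(10_000))))
```

Even this toy version shows why a library is preferable: the warm-up count, iteration count and statistics above are arbitrary choices, whereas BenchmarkDotNet determines them rigorously.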
