INTEGRATE 2019 – Day 3 | byBrick Development

Wednesday 5th June 2019

The Final Day at Integrate 2019!

09:00 Scripting a BizTalk Server Installation – Samuel Kastberg, Senior Premier Field Engineer, Microsoft

He started by explaining why we should script the installation, illustrating the concept with the analogy of serving a plate at a restaurant.

Predictability

  • Streamlined environments
  • Remember all details
  • Repeatable execution
  • Fewer errors than a manual installation

He also suggested what you should script:

  • Things that can be controlled
  • Things that would not change
  • Good candidates
    • Windows features
    • Provisioning VMs in Azure
    • BizTalk features and group configuration
    • MSDTC settings
    • Hosts and host instances
  • Bad candidates
    • Things that would change over time

He also suggested what to look out for before starting –

  • Set a time frame
  • Proper documentation of the execution process; scripting is not a replacement for documentation
  • Document your baseline 
  • Decide and standardize your developer machines, disaster recovery prep and test environments.

Good Practice –

  • The main code should orchestrate the process
    • Create functions for each task
  • Name scripts so that the order to run them is obvious
  • Write a module for common functions (a minimal sketch follows this list)
    • Logging, prompts, checks, etc.
  • Use a common timestamp for generated files
  • Be moderate with error handling
    • It is easy to spend a lot of time on unlikely errors
  • Debugging is a good friend
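
As an illustration of the "module for common functions" and "common timestamp" advice, here is a minimal sketch; the module name, function names and log folder are assumptions for illustration only, not taken from the talk.

```powershell
# Common.psm1 - hypothetical shared module used by the numbered install scripts.

# One timestamp per run so every file produced by the run shares the same suffix.
$script:RunTimestamp = Get-Date -Format 'yyyyMMdd_HHmmss'

function Get-RunTimestamp {
    # Returns the timestamp captured when the module was imported.
    return $script:RunTimestamp
}

function Write-InstallLog {
    param(
        [Parameter(Mandatory)] [string] $Message,
        [ValidateSet('INFO', 'WARN', 'ERROR')] [string] $Level = 'INFO',
        [string] $LogFolder = 'C:\BizTalkInstall\Logs'   # assumed location
    )
    if (-not (Test-Path $LogFolder)) { New-Item -ItemType Directory -Path $LogFolder | Out-Null }
    $logFile = Join-Path $LogFolder "Install_$($script:RunTimestamp).log"
    "$(Get-Date -Format 'u') [$Level] $Message" | Add-Content -Path $logFile
}

Export-ModuleMember -Function Get-RunTimestamp, Write-InstallLog
```

A main script named to show its place in the order (e.g. 10_InstallFeatures.ps1) would then import this module and wrap each task in Write-InstallLog calls.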

Common Issues –

  • Wrong binaries
  • Permissions to create cluster resources
    • e.g., no access rights in AD

When configuring the group, replace tokens in the configuration files with actual values. He then showed a quick demo of creating a host and host instances using PowerShell scripts, along with code-walkthrough slides.
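
His demo used his own scripts from the repository linked below; purely as a rough sketch of the general WMI approach to creating a host and a host instance, it looks roughly like this (the host name, Windows group and service account are placeholder assumptions):

```powershell
# Sketch: create an in-process BizTalk host and a host instance on the local server via WMI.
$ns       = 'root\MicrosoftBizTalkServer'
$hostName = 'BizTalkServerApplication_Demo'       # assumed host name
$ntGroup  = 'DOMAIN\BizTalk Application Users'    # assumed Windows group for the host
$svcUser  = 'DOMAIN\svc-bts-host'                 # assumed host instance service account
$svcPwd   = Read-Host -Prompt 'Host instance account password' -AsSecureString

# 1. Create the host (HostType 1 = In-Process).
$hostSetting = ([wmiclass]"${ns}:MSBTS_HostSetting").CreateInstance()
$hostSetting['Name']        = $hostName
$hostSetting['HostType']    = 1
$hostSetting['NTGroupName'] = $ntGroup
$hostSetting['AuthTrusted'] = $false
$hostSetting.Put() | Out-Null

# 2. Map the host to this server.
$serverHost = ([wmiclass]"${ns}:MSBTS_ServerHost").CreateInstance()
$serverHost['HostName']   = $hostName
$serverHost['ServerName'] = $env:COMPUTERNAME
$serverHost.Map() | Out-Null

# 3. Install and start the host instance on this server.
$hostInstance = ([wmiclass]"${ns}:MSBTS_HostInstance").CreateInstance()
$hostInstance['Name'] = "Microsoft BizTalk Server $hostName $($env:COMPUTERNAME)"
$plainPwd = [System.Net.NetworkCredential]::new('', $svcPwd).Password
$hostInstance.Install($svcUser, $plainPwd, $true) | Out-Null
$hostInstance.Start() | Out-Null
```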

For more information, see the blog post or the GitHub repository below.

https://skastberg.wordpress.com

https://github.com/skastberg/biztalkps

09:45 BizTalk Server Fast & Loud Part II: Optimizing BizTalk – Sandro Pereira – Microsoft MVP

This session was called Part II because it was a continuation of a session he delivered a few years back at Integrate. He opened by taking the real-life example of a car and comparing its components with BizTalk artifacts, a few of which are below:

  • Car chassis – BizTalk Server
  • Engine – SQL Server
  • Battery – Memory
  • Tyres – Hard drive

He also gave advice on optimizing performance:

  • Choose the right infrastructure for your BizTalk environment
  • Use queuing techniques to process large amounts of data
  • First observe how your environment behaves, analyze it and apply the necessary fixes; then repeat until the issue is resolved
  • Redesign if your existing BizTalk solution is causing a bottleneck
  • Use minimal tracking to avoid database and disk performance issues
  • He also showed SQL Server memory configuration that helps optimize message processing (a rough sketch follows this list)
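
As a hedged sketch of the kind of memory configuration he was referring to, capping "max server memory" is typically done with sp_configure; the instance name and the 8 GB value below are placeholders, not his recommendation.

```powershell
# Sketch: cap SQL Server's max server memory so the OS and BizTalk hosts keep headroom.
# Requires the SqlServer PowerShell module; instance name and value are placeholders.
$query = @"
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 8192; RECONFIGURE;
"@
Invoke-Sqlcmd -ServerInstance 'BTSSQL01' -Query $query
```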

He also walked through two real-world scenarios and how he managed to improve their performance.

Various solutions to optimize performance –

  • Recycling BizTalk and IIS (see the sketch after this list)
  • Tuning performance with configuration settings
  • SQL affinity and max server memory
  • Tweaking MQSeries polling intervals
  • Setting the orchestration dehydration properties
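
For the "Recycling BizTalk and IIS" item, a scheduled script typically restarts the in-process host instances and the relevant IIS application pool. This is only a generic sketch, and the application pool name is an assumption.

```powershell
# Sketch: recycle BizTalk in-process host instances and an IIS application pool.
Import-Module WebAdministration

# Restart every running in-process host instance on this server (HostType 1 = In-Process, ServiceState 4 = Running).
Get-CimInstance -Namespace 'root\MicrosoftBizTalkServer' -ClassName MSBTS_HostInstance |
    Where-Object { $_.HostType -eq 1 -and $_.ServiceState -eq 4 } |
    ForEach-Object {
        Invoke-CimMethod -InputObject $_ -MethodName Stop  | Out-Null
        Invoke-CimMethod -InputObject $_ -MethodName Start | Out-Null
    }

# Recycle the IIS application pool hosting the isolated adapters / web services (name assumed).
Restart-WebAppPool -Name 'BizTalkIsolatedAppPool'
```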

He ended the session by sharing his contact details and blog:

https://blog.sandro-pereira.com

10:25 Changing the game with Serverless solutions – Michael Stephenson, Microsoft MVP – Azure

He started the session with a short introduction about himself and about the Integration Playbook, a community he created that provides an integration-architecture view of the many technologies that make up the Microsoft integration technology stack.

His entire session revolved around an online store he built on Shopify. He showed and discussed the serverless components used in his solution, and explained how he uses Application Insights to gather statistics and how he uses them to improve the customer experience.

The solution involves various components: API Management, Power BI, Power Apps, Azure SQL Database, Service Bus, and Azure Functions (Shopping Cart Add, Shopping Cart Update, Order Add, Product Update).

He explained the various building blocks he uses for this application –

  • Business intelligence platform (Azure SQL Database, Cognitive Services, Power BI)
  • Integration platform (Service Bus, Functions, Logic Apps)
  • Communication & collaboration (Microsoft Teams, Bot Service, Microsoft QnA Maker)
  • Systems of engagement (Power Apps)
  • Product management & order fulfilment (Oberlo, manual)
  • Marketing & social media (Google AdWords, Facebook)
  • Payment suppliers (PayPal, Stripe)

He then showed a quick demo of how he has implemented webhooks in his solution, and how he uses Logic Apps to store the most popular products searched for or added to the cart. He uses this stored information to display the most popular product categories of the last two weeks on his site.
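
His webhook demo was built with his own functions and Logic Apps; purely as an illustration of the pattern, here is a minimal HTTP-triggered Azure Function (PowerShell) that could receive a Shopify-style webhook and hand it off to a queue. The binding names and payload shape are assumptions, not his implementation.

```powershell
# run.ps1 - sketch of an HTTP-triggered PowerShell Azure Function receiving a webhook.
# Assumes function.json defines an HTTP trigger 'Request', an HTTP output 'Response' and a
# Service Bus/queue output binding named 'outputQueueItem' (all names are assumptions).
using namespace System.Net
param($Request, $TriggerMetadata)

# Forward the raw event (e.g. a cart/add or order/create payload) to a queue so a
# Logic App can aggregate popular products downstream.
Push-OutputBinding -Name outputQueueItem -Value $Request.RawBody

# Acknowledge quickly - webhook callers typically expect a 2xx response within seconds.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = 'received'
})
```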

He also said that for any customer, shipment data and tracking are key; he has implemented a web application tab that allows the customer to track their order.

He ended the session by sharing his thoughts on how you can build a serverless solution.

Thoughts –

  • With Azure, a small business can build an enterprise-capable online store
  • We can implement back-office processes to support the business
  • The data platform lets us gain the insights we want
  • The cost/capacity/scale can start small and grow to very large
  • As a consultant I am seeing new business models and engagements with customers

11:35 Adventures of building a multi-tenant PaaS on Microsoft Azure – Tom Kerkhove, Azure Architect at Codit, Microsoft Azure MVP, Creator of Promitor

He started by giving a short introduction about himself. The presentation had around 80 slides, but he managed to make every point easy to understand. The first topic he covered was scaling up/down versus scaling in/out, along with choosing the right compute infrastructure (as control increases, so does the complexity).

He provided some guidance on scaling with Serverless, PaaS and CPaaS.

Designing for scale with PaaS

  • Good (you define how it scales, scaling awareness)
  • Bad (you define how it scales, hard to determine the perfect scaling rules)
  • Ugly (beware of flapping, beware of infinite scaling loops)

Designing for scale with Serverless

  • Good (the service handles scaling for you)
  • Bad (the service handles scaling for you, doesn't provide much awareness)
  • Ugly (dangerous – it is easy to burn a lot of money)

Designing for scale with CPaaS

  • Good (share resources across different teams, serverless scaling capabilities are available with Virtual Kubelet and virtual nodes)
  • Bad (you are in charge of providing enough resources, scaling can become complex)
  • Ugly (takes a lot of effort to ramp up on how to scale, there is lots to manage)

Later he shared a few pointers on multi-tenancy and on choosing a sharding strategy, along with how to determine tenants. More information: http://bit.ly/sharding-pattern
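
As a toy illustration of one possible sharding strategy (hash-based, which is only one of the options covered in the pattern linked above), the tenant identifier can be hashed to pick a shard deterministically. The shard names and tenant IDs here are made up.

```powershell
# Sketch: deterministic hash-based sharding - map a tenant ID to one of N shards.
$shards = @('sqldb-shard-00', 'sqldb-shard-01', 'sqldb-shard-02')   # assumed shard names

function Get-TenantShard {
    param([Parameter(Mandatory)] [string] $TenantId)
    # Stable hash of the tenant ID (MD5 used only for distribution, not security).
    $md5   = [System.Security.Cryptography.MD5]::Create()
    $bytes = $md5.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($TenantId))
    $index = [System.BitConverter]::ToUInt32($bytes, 0) % $shards.Count
    return $shards[$index]
}

Get-TenantShard -TenantId 'contoso'   # always resolves to the same shard
```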

With regards to monitoring, he suggested that training developers to use their own tools and test automation makes support a shared responsibility (health checks, enriching your telemetry, alert handling, RCA). On consuming webhooks: always route webhooks through an API gateway, as this decouples the webhook from your internal architecture.

The Lifecycle of a service (Embrace Change)

Private Preview (rough version of the product) > Public Preview (available to the masses) > Generally Available (covered by an SLA, supported version) > The End (deprecated, silent sunsetting, or reincarnation in 2.0). He ended the session on a positive note with a quote on embracing change: “Change is coming, so you’d better be prepared.”

12:25 Lowering the TCO of your Serverless solution with Serverless360 – Michael Stephenson, Microsoft MVP – Azure

He started the session with a reality check on the cloud support process. The main idea was to highlight how the entire support system works, who is responsible for what, and how an unskilled person can mess up your solution. He then explained how Serverless360 can be used to assign a support team based on the entities in a composite application.

He then showed a Service Map / Topology in Serverless360 and how it can be useful; the topology describes how your business application is composed.

He also showed the Atomic Scope architecture and how BAM (Atomic Scope) is now embedded in Serverless360, and shared a quick demo of the BAM functionality using Logic Apps.

Data sources for BAM include queues, Logic Apps, Functions, API Management and custom code.

Key Features to Democratize Support

  • I can visualize my Azure Estate and know what goes where
  • I can visualize how the app works
  • I can securely perform management actions
  • I can monitor to see if my service is operating properly
  • I can troubleshoot problems with individual transactions
  • I have least privilege access and auditing of who does what

13:45 Microsoft Integration, the Good, the Bad and the Chicken Way – Nino Crudele, Microsoft MVP – Azure

This session was full of energy. He started with an introduction about himself and shared the news that he is now a Certified Ethical Hacker. He then talked about the good old days when he worked with BizTalk and some of his experiences while moving to Azure. He thinks that BizTalk is still the best possible option for complex on-premises and hybrid scenarios.

Rather than technology, the real challenge is Azure governance. It is everything, and without it you cannot really use Azure. Governance is everywhere; it is all around us, even now.

His rule of life – you have only three ways to achieve a mission or task. Be brave!

The Good

       The Bad

              The chicken way

He then spoke about the Azure Scaffold, earlier and now (resource tags, resource groups, RBAC, subscriptions, resource locks, Azure Automation, Azure security standards, etc.), along with management groups and policies.
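
To make a few of the scaffold items more concrete, tags, a resource lock and an RBAC assignment can be applied with the Az PowerShell module roughly as below; the resource group, tag values, group object ID and role are placeholders, not from the session.

```powershell
# Sketch: a few Azure Scaffold building blocks with the Az module (names/values assumed).
Connect-AzAccount

# Resource group with governance tags.
New-AzResourceGroup -Name 'rg-orders-prod' -Location 'westeurope' `
    -Tag @{ CostCenter = 'FIN-001'; Owner = 'integration-team'; Environment = 'prod' }

# Resource lock so the group cannot be deleted accidentally.
New-AzResourceLock -LockName 'no-delete' -LockLevel CanNotDelete `
    -ResourceGroupName 'rg-orders-prod' -Force

# RBAC: give a team's AAD group Contributor on just this resource group.
New-AzRoleAssignment -ObjectId '00000000-0000-0000-0000-000000000000' `
    -RoleDefinitionName 'Contributor' `
    -ResourceGroupName 'rg-orders-prod'
```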

Does a god exist in Azure governance? The answer is yes – the global admin, who can restrict access for anyone or grant it for a certain period of time. Use Privileged Identity Management.

There are lots of fancy tools on the market that help you analyze company statistics, but come what may, the business loves Excel. Taking the finance department as an example, what they are really interested in is total usage quantity by region, location and department (a rough sketch of pulling that data follows).
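
Since the business loves Excel, one hedged way to get usage data into a grouped, exportable shape is the Az.Billing consumption cmdlet below. Property names (InstanceLocation, UsageQuantity, PretaxCost) can vary between module versions, so treat this as an outline rather than a recipe.

```powershell
# Sketch: pull the last 30 days of usage, summarize quantity/cost by region, export for Excel.
# Requires Az.Billing; property names may differ depending on the module/API version.
$usage = Get-AzConsumptionUsageDetail -StartDate (Get-Date).AddDays(-30) -EndDate (Get-Date)

$usage |
    Group-Object InstanceLocation |
    ForEach-Object {
        [pscustomobject]@{
            Region        = $_.Name
            UsageQuantity = ($_.Group | Measure-Object UsageQuantity -Sum).Sum
            PretaxCost    = ($_.Group | Measure-Object PretaxCost -Sum).Sum
        }
    } |
    Export-Csv -Path '.\usage-by-region.csv' -NoTypeInformation
```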

Use the price sheet (https://ea.azure.com/report/pricesheet) from the Azure EA portal to understand prices and to negotiate a discount with Microsoft (not applicable to everything).

With regards to security, he said it is good to have a dedicated team and resources handling it. Various tools can help: Burp Suite, Nmap, Snort, Metasploit, Wireshark and Logstalgia. Network management is core, and a good practice is to use a centralized firewall such as FortiGate. Logstalgia helps analyze network traffic and how packets are travelling; its visualization of a DDoS attack is great.

He also gave a quick look at how Logstalgia (website access log visualization – https://logstalgia.googlecode.com) works and how effective it is.

A good naming standard is a must, and he showed a tool that helps to set one (a rough sketch of a simple naming check follows). There are lots of options in Azure, each with pros and cons. If you get stuck with anything, create a support ticket if your organisation has an Enterprise Agreement (support is free – technical plus advisory).
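
As a generic, hypothetical illustration only (this is not the tool he showed), a naming convention such as `<type>-<app>-<env>-<region>` can be enforced with a simple check:

```powershell
# Sketch: validate resource names against a hypothetical <type>-<app>-<env>-<region> convention.
function Test-NamingStandard {
    param([Parameter(Mandatory)] [string] $Name)
    # e.g. rg-orders-prod-weu, func-orders-dev-weu (the pattern is an assumption, not his rule set)
    return $Name -match '^(rg|func|sb|apim|logic)-[a-z0-9]+-(dev|test|prod)-(weu|neu)$'
}

Test-NamingStandard -Name 'rg-orders-prod-weu'   # True
Test-NamingStandard -Name 'MyResourceGroup1'     # False
```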

Treat your Azure solution like your home and don't trust anybody; there is always a possibility that someone could inject scripts (hashing = integrity). Any change detected should raise an alert, and execution must stop.
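
The "hashing = integrity" point can be made concrete with a small sketch: compare a script's current hash against a known-good baseline and stop if it has changed. The file path and baseline file are assumptions.

```powershell
# Sketch: verify a deployment script against a known-good hash before running it.
$scriptPath   = 'C:\Deploy\Provision.ps1'            # assumed path
$baselinePath = 'C:\Deploy\Provision.ps1.sha256'     # assumed file holding the expected hash

$expected = (Get-Content -Path $baselinePath -Raw).Trim()
$actual   = (Get-FileHash -Path $scriptPath -Algorithm SHA256).Hash

if ($actual -ne $expected) {
    # Integrity check failed: alert and stop execution, as suggested in the session.
    Write-Error "Hash mismatch for $scriptPath - possible tampering detected."
    exit 1
}
& $scriptPath
```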

Documentation is key. He showed how you can utilize tools like Cloudockit (documentation for cloud architecture – https://www.cloudockit.com/samples), and he demonstrated a tool he built that is freely available at https://aziverso.com/ (the Azure Multiverse add-in for Office).

15:30 Creating a processing pipeline with Azure Functions and AIS – Wagner Silveira, Microsoft MVP – Azure

The last session of Integrate 2019 – the three days passed so quickly. He started the session with a quick introduction, then described a scenario, how the solution looked a year ago, and how they updated it. Scenario: EDI data received from an API is sent over to a big data repository for reporting and mining.

He then showed what changed, which included:

  • Azure Functions (EDIFACT support via a .NET package)
  • Azure Storage (claim check pattern so large payloads can flow through Service Bus – see the sketch after this list)
  • Application Insights
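
As a hedged sketch of the claim check pattern mentioned above (not his actual implementation), a PowerShell Azure Function can park the large EDIFACT payload in blob storage and put only a small reference message on Service Bus. The binding names, connection setting, container and file naming are all assumptions.

```powershell
# run.ps1 - sketch of the claim check pattern in a PowerShell Azure Function.
# Assumes function.json defines an HTTP trigger 'Request', an HTTP output 'Response' and a
# Service Bus output binding 'claimCheckMsg'; Az.Storage is available via managed dependencies.
using namespace System.Net
param($Request, $TriggerMetadata)

$payload  = $Request.RawBody                      # large EDIFACT interchange (assumed)
$blobName = "$([guid]::NewGuid()).edi"

# 1. Park the payload in blob storage (connection string setting and container are assumptions).
$tempFile = Join-Path $env:TEMP $blobName
Set-Content -Path $tempFile -Value $payload
$ctx = New-AzStorageContext -ConnectionString $env:PayloadStorageConnection
Set-AzStorageBlobContent -File $tempFile -Container 'edifact-payloads' -Blob $blobName -Context $ctx | Out-Null
Remove-Item $tempFile

# 2. Send only the claim check (a small reference) to Service Bus; consumers fetch the blob when needed.
Push-OutputBinding -Name claimCheckMsg -Value (@{ container = 'edifact-payloads'; blob = $blobName } | ConvertTo-Json)

Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{ StatusCode = [HttpStatusCode]::Accepted })
```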

He then showed a quick demo of it and of how exception handling is taken care of.

Dead Letter Queue Management

  • A Logic App polls each subscription's DLQ every 6 hours
  • Each subscription's DLQ could have its own logic
  • Email notifications
  • Error blob storage

One year on, talking about the present day with new technologies in place, what would the possible candidates be?

  • Integration Service Environment
  • Azure Durable Functions
  • Event Grid

Some of the important features released in the last year include (see the Key Vault sketch after this list):

  • Azure Functions Premium plan
  • Integrated support for Key Vault
  • Integrated support for managed service identity (MSI)
  • Virtual network support and service endpoints
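
Among the features above, the integrated Key Vault support is usually consumed through Key Vault references in app settings. As a sketch (the function app, vault and secret names are assumptions), it can be wired up with the Az module like this:

```powershell
# Sketch: point a function app setting at a Key Vault secret instead of storing the value directly.
# Requires the function app to have a managed identity with 'get' permission on the vault's secrets.
$secret = Get-AzKeyVaultSecret -VaultName 'kv-integration-prod' -Name 'ServiceBusConnection'

Update-AzFunctionAppSetting -Name 'func-edifact-prod' -ResourceGroupName 'rg-integration-prod' `
    -AppSetting @{ 'ServiceBusConnection' = "@Microsoft.KeyVault(SecretUri=$($secret.Id))" }
```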

Finally, he summarized the session with the bullet points below –

  • Look at the various technologies and options available
  • Watch out for operational costs
  • Consider the roadmap of the components
  • Keep the big picture in mind and know where your solution fits

To sum up the highlights: there was plenty to learn and gain from Integrate 2019. Happy to have been part of it.
