
Gilded Rose TDD & Refactoring Kata

As part of my summer learning plan, I also wanted to practice refactoring and test-driven development. I have been doing the Gilded Rose refactoring kata for the past 3 days and I have to say that it is a really great way to practice. After every iteration, I noticed an improvement in the code I was producing and also in the way I arrived at the solution.

I put my code up on GitHub, but it was only on the second and third days that I started creating branches for the solutions I got. In hindsight, I should have done this from the start as it is a good way to look back at the various solutions you come up with.

I recently finished reading Working Effectively with Legacy Code by Michael Feathers, so it was great to be able to put that knowledge into practice. The book makes for a great reference, so I actually got the hard copy as it is much easier to flip back and forth than with the Kindle version.

The testing framework already being used by the project is xUnit, so it was a chance for me to learn that framework as well. Apart from xUnit, I thought it would be a good opportunity to also start learning how to use Fluent Assertions.

The following is a timelapse of one of the iterations of the kata which I felt was fairly presentable.

My Approach

Before making any changes to the original code, I wanted to make sure that I had some tests to verify that the current code works as described and that I would not be breaking anything.

I did have to modify Program to be public in order to create an instance of it. I also created a constructor that assigns the Items property on initialization since, per the specification, we are not supposed to touch the Item class or the Items property.

I also added a GetItem() method to return the items so I can verify that the correct changes were made. Because items is passed by reference, I should just be able to do the assertions on items instead of having to create the local variable changedItems.
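To give an idea of the shape of these tests, here is a minimal sketch of one of them using xUnit and Fluent Assertions. The constructor and the direct assertions on the items list follow the approach described above, but the exact names (Program, UpdateQuality) come from the standard kata code and may differ slightly from my actual solution.

```csharp
using System.Collections.Generic;
using FluentAssertions;
using Xunit;

public class GildedRoseTests
{
    [Fact]
    public void UpdateQuality_DegradesNormalItemByOne()
    {
        // Arrange: seed the program with a single normal item via the new constructor.
        var items = new List<Item>
        {
            new Item { Name = "+5 Dexterity Vest", SellIn = 10, Quality = 20 }
        };
        var program = new Program(items);

        // Act
        program.UpdateQuality();

        // Assert: items is passed by reference, so we can assert on it directly.
        items[0].SellIn.Should().Be(9);
        items[0].Quality.Should().Be(19);
    }
}
```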


Next was a simple conversion of the for loop into a foreach loop to make it easier to extract a method that I can start testing.

fortoforeach

Visual Studio has some great options for automatic refactoring, which you can find when you press Ctrl+. on the lines you want to refactor. I basically highlighted everything in the foreach loop and extracted that as a method.

Extract Method
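Roughly, the loop ends up looking like this after the conversion and the extract-method refactoring (a sketch; the method name UpdateItemQuality matches what I use later in the post):

```csharp
public void UpdateQuality()
{
    // The original for loop, converted to foreach so the body can be extracted.
    foreach (var item in Items)
    {
        UpdateItemQuality(item);
    }
}

private void UpdateItemQuality(Item item)
{
    // The original nested if/else logic moves here unchanged for now,
    // which gives us a single-item method we can test in isolation.
}
```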

No major changes yet, so the tests should still pass.

extractedMethodResult

I then created parameterized tests just for the method so I can easily add more cases. Looking at the spec and at the current sample items, they didn’t really test out the extremes, like items that are already at 50 or 0 quality and need to be degraded or upgraded. I added a few more test cases so that we exhaust all possibilities. At this point, I left the conjured items with their old functionality and chose to deal with them once I had a better view of the code.

Testing  Update Item
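A sketch of what those parameterized tests might look like, assuming UpdateItemQuality has been made accessible for testing; the boundary values come straight from the kata spec, but the exact cases in my solution differ.

```csharp
using System.Collections.Generic;
using FluentAssertions;
using Xunit;

public class UpdateItemQualityTests
{
    [Theory]
    [InlineData("+5 Dexterity Vest", 10, 0, 9, 0)]            // quality never goes negative
    [InlineData("Aged Brie", 2, 50, 1, 50)]                   // quality never exceeds 50
    [InlineData("Aged Brie", 2, 49, 1, 50)]
    [InlineData("Sulfuras, Hand of Ragnaros", 0, 80, 0, 80)]  // legendary item never changes
    public void UpdateItemQuality_HandlesBoundaryCases(
        string name, int sellIn, int quality, int expectedSellIn, int expectedQuality)
    {
        var item = new Item { Name = name, SellIn = sellIn, Quality = quality };

        new Program(new List<Item> { item }).UpdateItemQuality(item);

        item.SellIn.Should().Be(expectedSellIn);
        item.Quality.Should().Be(expectedQuality);
    }
}
```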

At this point I wanted to extract UpdateItemQuality into its own class. I did the same with the tests associated with it to make things a little more organized.

Create Item Processor

Now, UpdateItemQuality is still a huge and complex method which I didn’t quite want to attack just yet, so I made some minor improvements, like merging nested if statements using Visual Studio’s automated refactoring, just to make sure that I don’t oversimplify the statements and end up changing the program ever so slightly.

UpdateItemQuality

Because I am being non-confrontational, I added the functionality to get a category for an item based on the item name. Again, because we want this to be TDD, we create the tests first and then develop the method.

GetCategory
GetCategory
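A sketch of the test-first flow for GetCategory; the Category enum values and the exact name matching are illustrative, not necessarily what is in my repository.

```csharp
using FluentAssertions;
using Xunit;

public enum Category { Normal, AgedBrie, Sulfuras, BackstagePass, Conjured }

public class GetCategoryTests
{
    // Tests first: they drive out the mapping from item name to category.
    [Theory]
    [InlineData("Aged Brie", Category.AgedBrie)]
    [InlineData("Sulfuras, Hand of Ragnaros", Category.Sulfuras)]
    [InlineData("Backstage passes to a TAFKAL80ETC concert", Category.BackstagePass)]
    [InlineData("Conjured Mana Cake", Category.Conjured)]
    [InlineData("+5 Dexterity Vest", Category.Normal)]
    public void GetCategory_MapsItemNameToCategory(string name, Category expected)
    {
        ItemProcessor.GetCategory(name).Should().Be(expected);
    }
}

public class ItemProcessor
{
    // Then just enough implementation to make the tests pass.
    public static Category GetCategory(string itemName)
    {
        if (itemName == "Aged Brie") return Category.AgedBrie;
        if (itemName.StartsWith("Sulfuras")) return Category.Sulfuras;
        if (itemName.StartsWith("Backstage passes")) return Category.BackstagePass;
        if (itemName.StartsWith("Conjured")) return Category.Conjured;
        return Category.Normal;
    }
}
```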

Instead of having the ItemProcessor process all the different items, I wanted to make use of polymorphism so that a more specific type of processor worries about the implementation depending on the category, so I then created the tests (and corresponding classes) for the different processors.

AgedBrieProcessor
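The processors might look roughly like this (a sketch under the naming described above; the rules mirror the original spec for normal items and Aged Brie):

```csharp
// The base processor keeps the default behaviour for normal items.
public class ItemProcessor
{
    public virtual void UpdateItemQuality(Item item)
    {
        if (item.Quality > 0) item.Quality--;
        item.SellIn--;
        if (item.SellIn < 0 && item.Quality > 0) item.Quality--;
    }

    // Pick the processor for an item based on its category.
    public static ItemProcessor For(Category category) =>
        category == Category.AgedBrie ? new AgedBrieProcessor() : new ItemProcessor();
}

// Each specific processor only worries about its own item's rules.
public class AgedBrieProcessor : ItemProcessor
{
    public override void UpdateItemQuality(Item item)
    {
        if (item.Quality < 50) item.Quality++;
        item.SellIn--;
        if (item.SellIn < 0 && item.Quality < 50) item.Quality++;
    }
}
```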

Once I isolated the items into their own update methods, it became fairly easy to refactor the UpdateItemQuality method by just removing the irrelevant cases for each item type.

reducedAgedBrie

Because I had tests with exhaustive cases, I had the confidence to make all these drastic changes. This also made it easier to then add the new functionality for conjured items once everything was refactored. See a timelapse of the full session below:

This post originally appeared on the coding hammock

INTEGRATE 2019 – Day 3 | byBrick Development

Wednesday 5th June 2019

The final day at Integrate 2019!

09:00 Scripting a BizTalk Server Installation – Senior Premier Field Engineer – Microsoft Azure

He started by explaining why we should script the installation, illustrating the concept with an example of serving a plate at a restaurant.

Predictability

  • Streamlined environments
  • Remember all details
  • Repeatable execution
  • Fewer errors than manual installation

He also suggested what you should script:

  • Things that can be controlled
  • Things that would not change
  • Good Candidates
    • Windows feature
    • Provision VM in Azure
    • BizTalk features and group configurations
    • MSDTC settings
    • Host and Host Instances
  • Bad Candidates
    • Things that would change over time

He also suggested what we should look out for before we start –

  • Set a time frame
  • A proper documentation of execution process. Scripting is not a replacement for documentation
  • Document your baseline 
  • Decide and standardize your developer machines, disaster recovery prep and test environments.

Good Practice –

  • Main code should orchestrate process
    • Create functions for task
  • Name scripts to show order to run them
  • Write a module for common functions
    • Logging, Prompts, Checking etc
  • Use a common timestamp for generated files
  • Be moderate with error handling
    • It is easy to spend a lot of time on unlikely errors
  • Debugging is a good friend

Common Issues –

  • Wrong binaries
  • Permissions to create Cluster resources.
    • No access rights in AD etc

While configuring the groups, replace tokens with values. Later he showed a quick demo of creating hosts and host instances using PowerShell scripts, along with code walkthrough slides.

For more information you can view the blog post below or access the GitHub repository.

https://skastberg.wordpress.com

https://github.com/skastberg/biztalkps

09:45 BizTalk Server Fast & Loud Part II : Optimizing BizTalk – Sandro Pereira – Microsoft MVP

This session was called part II because it was a continuation of the session he delivered a few years back at Integrate. He started off by taking a real-life example of a car and comparing its components with BizTalk artifacts, a few of which are below –

  • Car chassis – BizTalk Server
  • Engine – SQL Server
  • Battery – Memory
  • Tires – Hard Drive

He also gave inputs related to optimizing performance.

  • Choose the right infrastructure for your BizTalk environment
  • How we can use queuing techniques to process large amounts of data
  • He also suggested that you first observe how your environment behaves, analyze it and apply the necessary fixes, then repeat until the issue is fixed.
  • You can also redesign in case your existing BizTalk solution is causing a bottleneck.
  • Use minimal tracking to avoid database and disk performance issues.
  • He also showed SQL Server memory configuration, which helps in optimizing message processing.

He also did a walk-through of 2 real-world scenarios and how he managed to improve their performance.

Various Solution to optimize performance –

  • Recycling BizTalk and IIS
  • Tune performance with configurations
  • SQL Affinity – Max no of memories
  • Tweaking MQ series polling intervals
  • Set Orchestration dehydration property

He finally ended the session by sharing his details and blog information

https://blog.sandro-pereira.com

10:25 Changing the game with Serverless solutions – Michael Stephenson, Microsoft MVP – Azure

He started the session with a small introduction about himself and about the creation of the Integration Playbook, a community that provides an integration architecture view of the many technologies which make up the Microsoft integration stack.

His entire session revolved around the application he built, an online store (Shopify). He showed and discussed the serverless components used in his solution. He also explained how he uses Application Insights to gather various stats and how he uses them to improve the customer experience.

Building the Shopify store involves various components – API Management, Power BI, Power Apps, Azure SQL DB, Service Bus, Azure Functions (Shopping Cart Add, Shopping Cart Update, Order Add, Product Update).

He explained the various building blocks he uses for this application –

  • Business Intelligence Platform ( SQL Azure DB, Cognitive Services, Power BI)
  • Integration Platform ( Service Bus, Functions, Logic Apps)
  • Communication & Collaboration ( Microsoft Teams, Bot Service, Microsoft QNA Maker)
  • Systems of Engagement ( Power Apps)
  • Product management & order fulfilment (Oberlo, Manual)
  • Marketing & Social Media ( Google AdWords, facebook)
  • Payment suppliers (PayPal, Stripe)

He later showed a quick demo of how he has implemented webhooks in his solution and how he uses Logic Apps to store the most popular products searched for or added to the cart. He then uses this stored information to display the most popular product category on his site (over the last 2 weeks).

He also said that for any customer, shipment data and tracking are key. He has implemented a web application tab which allows the customer to track their order.

He finally ended the session by sharing his thoughts on how you can build a serverless solution.

Thoughts –

  • With Azure a small business can build an enterprise capable online store
  • We can implement back office processes to support the business
  • The data platform lets us gain the insights we want
  • The cost/capacity/scale can start small and grow to very large
  • As a consultant I am seeing new business models and engagements with customers

11:35 Adventures of building a multi-tenant PaaS on Microsoft Azure – Tom Kerkhove, Azure Architect at Codit, Microsoft Azure MVP, Creator of Promitor

He started by giving a short introduction about himself. The presentation had around 80 slides, but he managed to explain each and every point nicely. The first topic he covered was scaling up and down versus scaling in and out, along with choosing the right compute infrastructure (as control increases, so does the complexity).

He provided some inputs related to scaling ( Serverless, PaaS, CPaaS).

Designing for scale with PaaS

  • Good ( define how it scales, scaling awareness)
  • Bad ( define how it scales, hard to determine perfect scaling rules)
  • Ugly (beware of flapping, beware of infinite scaling loops)

Designing for scale with Serverless

  • Good ( The service handles scaling for you)
  • Bad ( The service handles scaling for you, doesn’t provide lots of awareness)
  • Ugly ( Dangerous to burn lot of money)

Designing scale with CPaaS

  • Good (share resources across different teams; serverless scaling capabilities are available with Virtual Kubelet and Virtual Nodes)
  • Bad (you are in charge of providing enough resources, and scaling can become complex)
  • Ugly ( Take a lot of effort to ramp up on how to scale, there is lots to manage)

Later he shared a few inputs related to multi-tenancy, choosing a sharding strategy and determining tenants. More information: http://bit.ly/sharding-pattern

With regards to monitoring, he suggested that training developers to use their own tools and test automation should be a shared responsibility (health checks, enriching your telemetry, alert handling, RCA). Related to consuming webhooks – always route webhooks through an API gateway (this decouples the webhook from your internal architecture).

The Lifecycle of a service (Embrace Change)

Private Preview (rough version of the product) > Public Preview (available to the masses) > Generally Available (covered by SLA, supported version) > The End (deprecated, silent sunsetting, reincarnation in 2.0). He ended the session on a positive note with a quote on embracing change – “Change is coming, so you’d better be prepared”.

12:25 Lowering the TCO of your Serverless solution with Serverless 360 Michael Stephenson, Microsoft MVP – Azure

He started the session with a reality check of the cloud support process. The main idea was to highlight how the entire support system works, who is responsible for what, and how an unskilled person can mess up your solution. He then explained how Serverless360 can be used to assign a support team based on entities in a composite application.

He later showed a Service Map / Topology in Serverless360 and how it can be useful. This topology maps out how your business application is orchestrated.

He also showed the Atomic Scope architecture and how BAM (Atomic Scope) is now embedded in Serverless360, then shared a quick demo of the BAM functionality using Logic Apps.

Data sources for BAM include Queues, Logic Apps, Functions, API Management and custom code.

Key Feature to Democratize Support

  • I can visualize my Azure Estate and know what goes where
  • I can visualize how the app works
  • I can securely perform management actions
  • I can monitor to see if my service is operating properly
  • I can troubleshoot problems with individual transactions
  • I have least privilege access and auditing of who does what

13:45 Microsoft Integration, the Good the Bad and the Chicken way – Nino Crudele, Microsoft MVP – Azure

This session was full of energy. He started with an introduction about himself, sharing the news that he is now a Certified Ethical Hacker, then talked about the good old days when he worked with BizTalk and some of his experiences while moving to Azure. He thinks that BizTalk is still the best possible option for complex on-prem and hybrid scenarios.

Rather than technology, the real challenge is Azure governance. It is everything, and without it you can’t really use Azure. Governance is everywhere, it’s all around us. Even now.

His rule of life – you have only three ways to achieve a mission or task. Be brave!

The Good

       The Bad

              The chicken way

He later spoke about the Azure scaffold, earlier and now (resource tags, resource groups, RBAC, subscriptions, resource locks, Azure Automation, Azure security standards etc.), along with management groups and policies.

Does a god exist in Azure governance? The answer is yes – the Global admin, who is the one who restricts access or grants it for a certain period of time. Use Privileged Identity Management.

There are lots of fancy tools available on the market which help you analyze company statistics, but come what may, the business loves Excel. Taking the finance department as an example, what are they really interested in – total usage quantity by region, location and department.

Use Pricesheet (https://ea.azure.com/report/pricesheet) from Azure portal to understand the price and negotiate for a discount from Microsoft ( not applicable to everything).

With regards to security, he said that it is good to have a dedicated team and resources handling it. Various tools will help – Burp Suite, Nmap, Snort, Metasploit, Wireshark, Logstalgia. Network management is core, and a good practice is to use a centralized firewall like FortiGate. Logstalgia helps us analyze network traffic and how packets are travelling; its visualization of a DDoS attack is great.

He also showed a quick glance on how the Logstalgia (website access log visualization – https://logstalgia.googlecode.com ) works and how effective it is.  

A good naming standard is a must, and he also showed a tool which helps to set one. There are lots of options in Azure, each with pros and cons. In case you are stuck with anything, create a support ticket if your org has an Enterprise Agreement (support is free – technical + advisory).

Consider your Azure solution like your home and don’t trust anybody; there is always a possibility that someone could inject scripts (hashing = integrity). Any change detected should raise an alert and execution must stop.

Documentation is key. He explained how you can utilize tools like Cloudockit (documentation for cloud architecture – https://www.cloudockit.com/samples). He also showed a tool he built which is freely available at https://aziverso.com/ (the Azure Multiverse add-in for Office).

15:30 Creating a processing pipeline with Azure Functions and AIS – Wagner Silveira, Microsoft MVP – Azure

The last session of Integrate 2019 – the 3 days passed so quickly. He started the session by giving a quick introduction, then described a scenario, how the solution looked a year ago, and how they updated it. Scenario – EDI data received from an API and sent over to a big data repository for reporting and mining.

He later showed what changed which included

  • Azure Functions (EDIFACT support via a .NET package)
  • Azure Storage (claim-check pattern to use Service Bus)
  • Application Insights

He later showed a quick demo related to it and how the exception handling is taken care of.

Dead Letter Queue Management

  • Logic Apps polling subscriptions DLQ every 6 hours
  • Each subscription's DLQ could have its own logic
  • Email notifications
  • Error blob storage

One year later, talking about the present day with new technologies in place, what would be the possible candidates?

  • Integration Service engine
  • Azure Durable Functions
  • Event Grid

Some of the important features released in the last year include –

  • Azure Functions Premium
  • Integrated support to key vault
  • Integrated support for MSI
  • Virtual network support and service endpoints

Finally he summarized the session with below bullet points –

  • Look at various technology and options available
  • Watch out for operational cost
  • Road map of the components
  • Big picture and where your solution fits.

To sum up the highlights, there was plenty to learn and gain from Integrate 2019. Happy to have been part of it.

INTEGRATE 2019 – Day 2 | byBrick Development

Tuesday 4th June 2019

Day 1 was packed with lots of information; let’s have a look at what day 2 had to offer.

08:30 – 5 tips for production ready Azure Functions – Alex Karcher, Program manager – Microsoft

Day 2 started off with a presentation from Alex Karcher, where he shared five major tips:

  • Serverless APIs & HTTP (premium plan) scale
  • Event stream Processing and scaling
  • Options for Event Hubs scaling
  • Inner and Outer loop development ( Azure DevOps CI/CD)
  • Monitoring & Diagnostics 
    • Application Insights ( easy to integrate with Functions )
    • Distributed tracing
    • Application Map trace and diagram ( view dependencies)

He also spoke about the Consumption plan, and how auto-scaling in the Premium plan gives more control than the App Service plan.

09:15 API management deep dive – Part 1 – Miao Jiang, Product Management – Microsoft

He spoke about the automation challenges with API management which included below 2 bullet points –

  • How to automate deployment of API’s into API management
  • How to migrate configurations from one environment to another

He suggested an approach for building a CI/CD pipeline using ARM templates. He also showcased deployment of a Food Truck application to API Management.

Creator tool – built to generate an ARM template that can then be used to deploy to API Management.

Extractor tool – can be used to extract existing configuration from published APIs. He used VS Code and the API Management extension (private preview) to showcase a demo. You can perform all the necessary operations even without going to the Azure portal.

Takeaways  –

  • Use separate service instance for environments
  • Developer and Consumption tiers are good choices for pre-production
  • Templates based approach is recommended
    • Consistent with the rest of Azure services
    • RBAC
    • Scalable
  • Modularizing templates provides wide degree of flexibility
    • Access control, governance, granular deployments

09:45 Event Grid Update – Bahram Banisadr, Program Manager – Microsoft

He started the session by speaking about why there is a need for Event Grid and also explained the basics of how it works.

What’s new  –

  • Service Bus as an Event Handler (preview)
  • 1MB Events Support (preview)
  • IOT Hubs device telemetry events (preview)
  • GeoDR (GA, Generally available)
  • Advanced Filters (GA)
  • Events Domain (Bundling of topics)
    • 100,000 topics per Event Domain
    • 100 Event domain per Azure subscription

He also presented a Case Study on Azure Service Notification’s.

What’s Next ?

  • Remove work arounds
  • Greater transparency ( proper diagnose/debug functionality)
  • Cloudevent.io

The session ended with a Quick demo on event grid.

10:45 Hacking Logic Apps – Derek LI , Program Manager – Microsoft | Shae Hurst, Engineer – Logic Apps Microsoft

Derek Li started the session by saying that whatever was presented in the slides would be something the audience had never seen before. The session opened by announcing a new feature in Logic Apps called Inline Code (public preview). Shae Hurst showed a quick demo of how we can use inline code in our Logic Apps and how easy it is to implement. Demo – extract a list of email addresses using a regex (JavaScript).

Execute JavaScript Code (public preview). More languages to come in the near future.

VS Code for Logic Apps  –

  • Create or Add existing logic App
  • Automatic creation of ARM deployment template
  • Azure DevOps integration

Tips of Trade

  • Sliding window trigger
  • Run against older version

He also suggested when we should go for Inline Code vs Azure Functions – if your code takes longer than 5 seconds to execute, use Azure Functions.

He also shared few tips on how to avoid 429s (throttling)

  • Use Singleton Logic App to call connector to avoid parallel branches/instances fighting over rate limits
  • Use different connectors per action/Logic App
  • Use multiple connections per action

What’s on Derek’s mind?

  • More Inline & VsCode
  • New Designer
  • Better Token picker
  • Emojis generator connector

11:30 API Management: New Developer portal – Mike Budzynski, Product Manager Microsoft

This was the session where he shared the news about the new developer portal, which will be available to all users on the coming Wednesday (12th June 2019). He gave a quick intro to the portal, which is used either by API providers (design, content editors etc.) or by API consumers (app developers).

Some key points to take note of –

  • The portal is built from scratch
  • Technology used – JAMstack (JavaScript, APIs, markup)
  • It has a modern look and feel
  • It is open source and DevOps friendly

He also showed a quick demo of the new developer portal. The look and feel resembled the Integrate 2019 website.

You can get more information related to the developer portal on the link below https://github.com/Azure/api-management-developer-portal

12:00 Lunch

13:00 Making Azure Integration Services Real – Matthew Farmer, Senior Program Manager – Microsoft

By 2022, 65% of organizations will have moved to hybrid integrations. There are 4 different integration scenarios –

  • Application to Application
  • Business to Business
  • SaaS
  • IOT

Each of them has different integration challenges, and he quoted a few which we may face while working with these integrations (different interfaces, cloud or on-prem, service-oriented, distributed etc.).

IPaaS – Below are the 4 key integration components for building a solution

  • API’s
  • Workflows
  • Messages
  • Events

He also showed a quick demo related to processing orders, showing how they all integrate and work together.

https://aka.ms/aisarch (Basic Enterprise integration on Azure)

Free Whitepaper (Azure Integration Services)

https://aka.ms/integrationpaper

Few Slides where he spoke about BizTalk to Azure Integration Services

  • There isn’t 1-1 mapping
  • Adopting cloud paradigm requires a different approach
  • Many new concepts to take advantage of
    • Connectors
    • API Economy
    • Serverless
    • Reactive code
  • Many Assets can be transformed from BizTalk to Logic Apps easily 
    • Schemas
    • Maps
    • EDI Agreements

All of the above can easily be moved to a Logic Apps Integration Account.

  • Orchestration and Pipelines can be remodeled in Logic Apps

Tricky things that are hard to move –

  • BizTalk implementations with huge code bases
  • Lots of rules engine (sometimes)

Buy in to the vision of Azure Integration Services

  • API Economy
  • Logic Apps as application ‘glue’
  • Serverless – or dedicated look at integration services
  • Pay as you use

Identify use case, Design a target architecture and Create a migration plan.

Making it Real

  • Understand the principles
  • Strategy over tactics, value over cost
  • Build a migration plan
  • Don’t under govern
  • Don’t over govern
  • Think about culture change

13:40 Azure Logic Apps Vs Microsoft Flow, why not both ? Kent Weare, Microsoft MVP – Business Application

The session started off with an explanation of Microsoft Flow’s features.

  • ISaaS (Integration Software as a Service)
  • Azure Subscription not required
  • License entitlement available through Dynamics 365 and Office 365
    • Additional standalone license available
  • Part of power platform (Power Apps, Power BI, Microsoft Flow)
    • Deep Integration with PowerApps
  • Over 275+ available connectors.
  • Custom Connectors, Standard and Premium (P1 required)
  • Cloud and On-Premise
  • Approvals
    • Authenticated
    • Tracked in Common data service
    • Custom Approval options
    • Can respond from
      • Flow Approval center
      • Email
      • Flow Mobile application
      • Microsoft Teams
    • Graduate flows to Logic Apps (a few conditions apply)

Later he jumped to describe some features for Logic Apps

  • IPaaS (Integration Platform as a Service)
  • Azure Subscription required
  • Consumption based billing
  • Part of Azure Integration Services platform (API Management, Service Bus, Event Grid)
  • Over 275 + available connectors
  • Around 95% symmetric between Flows and Logic Apps
  • Enterprise connectors ( SAP, IBM MQ, AS2, EDI EDIFACT)
  • 3rd party custom connectors + custom connectors
  • ISE – Vnet support
  • Cloud and On-premise
  • Editing Experience
    • Web Browser
    • Visual Studio 2017/2019
      • Azure Logic App tools for Visual Studio 2017/2019
      • Enterprise Integration Pack
    • Visual Code
    • Continuous Integration / Continuous Deployment
      • Azure DevOps
    • Integration Pack
      • Integration Accounts
      • Typed Schemas
      • Flat File encoding/decoding
      • XML Transformation
        • BizTalk tooling
      • JSON Transformation
        • Liquid
      • Third Party management
        • Partners and Agreements
    • ISE (Integration Service Environment)
      • VNet connectivity
      • Private static outbound IPs
      • Custom inbound domain names
      • Dedicated compute & Isolated storage
      • Scale In/Out capabilities
      • Flat Cost – whether you use it or not
      • Extended limits
    • Monitoring
      • Webhook integration with Logic Apps for event orchestration
      • 3rd Party support for Serverless360

As we can see, there are a lot of shared capabilities between Microsoft Flow and Azure Logic Apps. There are also some subtle differences between the two, but they can play a significant role in determining which is the best tool for the job. Ultimately, tooling should be selected based upon organization design and the complexity of the requirements.

The Winner is …

  • The organization which leverages both tools to address its needs
  • Innovation doesn’t happen while waiting in line
  • Implement governance and education that allows your business to scale

He later shared a link to his blog post  along with links to Middleware Friday and Serverless Notes

http://www.integrationusergroup.com/middleware-friday

https://www.serverlessnotes.com/

14:20 Your Azure Serverless Applications Management and Monitoring simplified using Serverless360 – Saravana Kumar, Founder – Serverless360 / Microsoft MVP – Azure

He started off by sharing an Agenda for his presentation

  • Management of your serverless Apps
  • Improving DevOps for your serverless Apps
  • End-to-End tracking for your serverless Apps
  • Customer Scenarios

What is a serverless app? Similar to LEGO blocks –

Azure Logic Apps, Azure Functions, Azure APIM, Azure Service Bus, Azure Relays, Azure Event Grid, Azure Event Hub, Azure Storage – Queue, table, files, Azure Web Apps, Azure SQL Database

He later showed few examples of Serverless Apps along with what problems we face while managing these Apps

  • No Visibility and hard to manage
  • Complex to diagnose and troubleshoot
  • Hard to secure and monitor 

We see similar problems in modern applications, which he explained by walking us through the lifecycle of a BizTalk application. With great power comes great responsibility – what are the best solutions to manage your serverless apps?

  • Composite Applications and Hierarchical Grouping
  • Service Map to understand the architecture
  • Security and Monitoring under the context of Application

He later showed and navigated through the Serverless360 product.

Devops Improvements

  • Templated entity creation
  • Auto process left over messages
  • Auto process dead letter messages
  • Remove storage blobs on condition
  • Replicate QA to Production
  • Detect and Auto correct entity states

BAM (end-to-end tracking) was also demonstrated during the session. He also showed how we can use this functionality and the Serverless360 connector in our Logic Apps for tracking purposes.

The session ended with 3 different customer scenarios currently using Serverless360. You can book a demo to learn more about the product at https://www.serverless360.com

15:30 Monitoring Cloud and Hybrid Integration Solution Challenges Steef-Jan Wiggers, Microsoft MVP – Azure

He started the session by speaking about cloud-native integration solutions – AI. He also spoke about the challenges we face while creating either hybrid or cloud solutions, and showcased a real-world scenario related to hybrid integration.

He also spoke about different types of Monitoring and challenges faced.

  • Health Monitoring
  • Availability Monitoring
  • Performance Monitoring
  • Security Monitoring
  • SLA Monitoring
  • Auditing
  • Usage Monitoring
  • Application Logs
  • Business Monitoring
  • Reporting

How users can use different Azure artifacts to perform monitoring activities (Azure Monitor, Log Analytics, Application Insights, Power BI, configuring alerts).

How can people build effective hybrid integration scenarios (training, hands-on labs, learning through mistakes, knowledge base, forums, guidance, mentoring, exams (AZ-103, AZ-900))? When the solution is in place, you must have a strong support model and well-defined processes.

SUPPORTABILITY MATRIX

SKILL MATRIX

There are different products available on the market; based on your requirements, select the right tools which suit your needs. A few tools are:

  • Serverless360
  • Biztalk360
  • Atomic Scope
  • Invictus Framework (BizTalk, AZURE)
  • AIMS
  • NewRelic
  • DynaTrace and the list goes on…

There are also different ways of monitoring your services in Azure. He also walked through a cloud-native solution – order processing revisited – and gave inputs on which products or services to use.

Finally he ended the session by sharing a few resources for learning.

  • Azure Administrator AZ-103
  • Azure Cost Management
  • Microsoft Azure Monitoring
  • Codit Invictus for Azure and BizTalk
  • Codit D365 Whitepaper
  • Serverless360 Blog
  • ServerlessNotes
  • MiddlewareFriday

16:10 Modernizing Integrations – Richard Seroter, Microsoft MVP Azure

He started the session by going back to the year 2009, when he wrote a book called SOA Patterns with BizTalk Server 2009.

Modernization is a spectrum

Various tools which follow this spectrum include BizTalk, SSIS and Azure Service Bus. Later he spoke about a few integration concepts and what his take on each would be, structured as below –

  1. My advice in 2009
  2. My advice in 2019
  3. Benefits of 2019 advice
  4. Risks with 2019 advice 

Content based routing

  1. BizTalk Server with send port subscriptions
  2. Use BizTalk Server on-premise and Service Bus and Logic Apps for cloud based routing
  3. Your message engine is scalable and flexible
  4. Explicit property promotion needed for Service Bus or you need Logic Apps to parse the messages. Cloud based rules are not centralized

Later he shared links to some blogs 

De-Batching from a database

  1. Configure in the BizTalk SQL Adapter and de-batch payload in receive pipeline
  2. For bulk data, de-batch in a Logic App. Switch to real-time, event-driven change feeds where possible
  3. With change feeds, process data faster, with less engine based magic
  4. De-batching requires orchestration (LogicApps) versus pipeline-based de-batching. Can be a more manual setup.

This was the moment when my blog was shared, and it made my day!

Stateful Workflow with correlation

  1. Use Orchestration and take advantage of dehydration, correlation and transaction with compensation
  2. Use Durable Functions for long-running sequences along with Logic Apps and Service Bus; break apart giant orchestrations into choreographed sequences
  3. Easier for any Developers to build workflow
  4. You may come across limits in how long a workflow can “wait”, and there is less centralized coordination and observability

Complex data transformation

  1. Use the BizTalk mapper to transform data structures and take advantage of functoids and inline code
  2. Map data on the way out of it all, and use Liquid templates for transformation but not business logic. Also consider transforming in code (functions)
  3. Avoid embedding too much brittle logic within a map and leave it up to receivers to handle data structure changes
  4. Not suitable for flat files or extremely difficult transformations. Puts new responsibilities on client consumers

Integration with cloud endpoints

  1. Call cloud endpoint using http adapter and custom pipeline components for credentials or special formatting
  2. Use Logic Apps and connectors for integration with public cloud services. Use Logic Apps adapter for BizTalk where needed
  3. Any developer can integrate with cloud endpoints and you have more maintainable integrations
  4. More components from more platforms participating in an integration

Strangling your legacy ESB

  1. Put new integrations into the new system, and rebuild existing ones over time
  2. Similar to 2009, but avoid modernizing to a single environment or instance. Use event storming to find seams to carve out
  3. Get into managed systems that offload operational cost and are inviting to more developers
  4. You will have a lengthy period of dual costs and skillsets

Getting Integrations into Productions

  1. Package up BizTalk assemblies, libraries, scripts and policies into MSI and deploy carefully.
  2. Put On-premise and Cloud Apps onto continuous integration and delivery pipelines. Aim for Zero downtime deploys
  3. Reduce downtime, improve delivery velocity and reliability. Introduce automation that replace human intervention
  4. Complicated to set up with multi-component integrations. Risk of data loss or ordering anomalies during upgrade rollouts.

Building Integrations Team

  1. Invest in training and building center of excellence
  2. Integration experts should coach and mentor developers who use variety of platform to connect systems together
  3. Fewer bottlenecks waiting for Integration team to engage and more types of simple integration get deployed
  4. More distributed ownership and less visibility into all integrations within the company

16:50 Cloud Architecture Recipes for the Enterprise – Eldert Grootenboer, Microsoft MVP Azure 

In the final session for the day, Eldert started by explaining how on-prem infrastructure has to be managed by the enterprise, whereas with cloud IaaS you don’t have to worry about the infrastructure (patches, OS updates etc.) – all of that is taken care of by Microsoft once your environment is in place. He also discussed how serverless can be useful when your primary focus is on the business needs and nothing else.

  • What is the right size of server for my business needs ?
  • How can I increase server utilization ?
  • How many servers do I need ?
  • How can I scale my app ?

He also suggested before we implement any solution we must draw a line and set guidelines.

  • What the desired architecture should be
  • Have a proper involvement from all the teams (Business units, architect and everyone involved) so that all are on the same page.
  • Look at the various options available and utilize them rather than building your own. In case you do go down that path, try to use PaaS offerings.
  • Other key things to look out for included –
    • Event Driven approach
    • Scalability
    • Loosely coupled solution
    • Integration patterns
    • DevOps strategy
    • Middleware
  • Look out for something that suits your needs and don’t buy something just because it is hyped or attractive
  • Your environment must be Secure and monitored.

Explore Azure components while deciding on the architecture (serverless – Logic Apps, Event Grid, Functions), containers, and App Insights for monitoring purposes. There are endless possibilities to choose from. He also shared a few of his customer experiences.

Cloud native starts from PaaS, and DevOps is also something to think about. A few takeaways from the session –

  • Understand and make sure all the scenarios are captured
  • Have a good governance and security model
  • Look out for endless possibilities available out there
  • After understanding your needs consider a cost effective method.

You can check more insights on Day 3 here.

INTEGRATE 2019 – Day 1 | byBrick Development

I would like to pen down my experience, updates and takeaways from this year’s Integrate 2019 (London). Being a first-timer, there was so much to gain from Integrate 2019. I will be dividing this article across the 3 days.

The event was organized at etc.venues, London, 3rd – 5th June 2019. As soon as you entered the event there was a big board which said – WELCOME INTEGRATE 2019. Registration started at 07:30 and it was pretty well organized. You were handed your Integrate attendee badge with a bag which had a book, a pen and a few pamphlets, along with the agenda for the next 3 days. Breakfast and lunch were provided each day during Integrate 2019.

Registration at Integrate 2019

There were also Booths from the sponsors & organizers – ( BizTalk360, Serverless360, Quibiq, Codit, Hubfly ).

Sponsors at Integrate 2019

The event was almost a full house with around 500 attendees. As for the speakers, each one had a different way of presenting and I thoroughly enjoyed every one of them. Though I will not lie, after the lunch break it at times became a bit hard to concentrate, but I somehow managed to get over it.

Speakers at Integrate 2019

Monday 3rd June 2019

08:45 – Integrate 2019 – Welcome Speech Saravana Kumar, Founder/CEO Biztalk 360, Serverless360, Atomic Scope.

Being the 8th anniversary of Integrate, a welcome speech was presented by Saravana, where he welcomed all the attendees, speakers and sponsors. He introduced their new identity named “Kovai.co” – https://www.kovai.co – with quick info about their products (BizTalk360, Serverless360, Atomic Scope). He also gave inputs related to Integration Monday and Middleware Friday, which are run by the Microsoft community, and shared a few links where you can explore and learn more about Azure. You can also use the hashtag #integrate2019 to see inputs from various attendees, speakers and organizers.

https://www.serverlessnotes.com/

https://www.integration-playbook.io/

09:00 – Keynote, Jon Fancey, Group Principal PM Manager – Microsoft.

A speech, “Beyond Integration”, was presented by Jon Fancey. He spoke about their 2015 vision and how they wanted to be the leader in the iPaaS space. The iPaaS platform today has more than 300 connectors (Flow, Logic Apps). It was in 2017 that Microsoft’s iPaaS offering was listed by Gartner, and it continues this year as well. If I heard it correctly, it was quoted as a leader for 2018 by Gartner. He quoted Gartner – “The more you innovate, the more you need to integrate”. Later he invited 3 customers who use the Microsoft platform for their integration transformation.

09:10 – Ramak Robinson, Area Architect Integration at H&M

She spoke about H&M’s vision – “Most Loved Design in the World”. She also spoke about its technology ambition, the Integration Competence Center at H&M and their cloud transition journey. She also presented a case study which they are building together with Microsoft – Digital Receipts.

09:20 – Daniel Ferreira, Sr. Cyber-Security Data Scientist, Shell

He spoke about the operational challenges and how they are using API integration to enable their cyber-security operations. He also provided a quick demo of chatbot functionality.

 09:40 – Vibhor Mathur, Lead Architect, Anglo-Gulf Trade Bank

He spoke about how they were on a mission to build the first digital trade finance bank in only 6 months. He spoke about the various Microsoft offerings they used, like Logic Apps, Service Bus, API Management and Azure AD, to achieve their target. They wanted to build an architecture which was lean, secure, easy to upgrade and highly available. Were they able to set up a digital bank in a span of 6 months? The answer is YES – well, it took 18 days more than expected, impressive indeed (6 months, 18 days).

All of the above 3 industries (energy / retail / finance) use Microsoft as their cloud platform.

10:00 – Logic Apps Update – Kevin Lam, Principal Program Manager

He shared some interesting updates related to Logic Apps. In the last 6 months, 38 new connectors have been added to Logic Apps (Microsoft + 3rd party). Logic Apps is growing, and it also allows you to create your own custom connectors and publish them to the marketplace. He gave an overview of how to integrate using Logic Apps, which included orchestration, message handling, monitoring, security etc.

You can use Visual Studio 2019 to deploy your Logic App directly to the Azure portal. You can also use Visual Studio Code to deploy your Logic App via an ARM template. He later spoke about the ISE (Integration Service Environment) architecture, the ISE deployment model and the ISE roadmap.

11:15 – API Management Update – Vlad Vinogradsky – Product Leader, Microsoft

He started off with what we can use today and what they are working on in the field of API Management. He gave an overview of API Management and spoke about recent developments, along with a demo of hosting Kubernetes on your machine.

  • Manage Identities – Authenticate/Authorize your service. Enable it so that you can authenticate it with your backend.
  • Policies – support for encrypted documents and for performing simple and advanced validations.
  • Protocol settings. Enable TLS
  • Bring your own cache (Redis compatible)
  • Subscriptions – Enable tracing on Keys
  • Observability –  Set same configuration settings for both Azure Monitor and App Insights, preserve resources by turning on sampling, enable/disable settings for whole API tenant, you can also specify additional headers to log etc.
  • Function + API management – Import function and push function as API
  • Consumption tier – GA last week ( billed per execution and can scale down to zero when there is no traffic ).
  • DevOps resource tool kit – You can have a view at it on GitHub

Future –

  • He told us a bit about the new developer portal; more about it was revealed on 4th June 2019.
  • With API Management being a cloud-only service and most customers using both cloud and on-prem (hybrid) environments, Microsoft will soon be launching a self-hosted API Management gateway, allowing you to deploy the gateway component on-prem (launching late summer or early fall).

A link was also shared for all the API lovers – Azure API Management resources: https://aka.ms/apimlove

12:00 – The Value of Hybrid Integration – Paul Larsen, Principal Program Manager, Microsoft

This was the session most of the BizTalk developers were eagerly waiting for. At the end of this year (2019), a new BizTalk Server 2020 will be launched. You can refer to the following to learn more about its latest features: https://azurebiztalkread.wordpress.com/2019/06/03/integrate-2019-biztalk-server-2020/

He also shared information related to API Management, Logic Apps, Service Bus and Event Grid. Later he announced a new connector for Logic Apps – IBM 3270 (preview). A hybrid demo integrating an IBM mainframe program with the Azure cloud was also showcased using the Logic Apps 3270 connector, followed by the future roadmap.

12:30 – Lunch Break

13:30 – Event Hubs update Dan Rosanova, Group Principal Program Manager, Microsoft

He started with an introduction to the Event Hubs (PaaS) offering along with the queue messaging pattern. He later explained different protocols like Kafka, HTTP and AMQP and how Event Hubs and Kafka differ from queues, explained the Kafka / Event Hubs conceptual architecture, and covered what Microsoft has to offer in Event Hubs for Apache Kafka.

Four offerings of Kafka in Azure –

  • Clustered offering
  • PaaS service
  • Marketplace offering
  • DIY with IaaS

Later he showed stats related to Before and After load balancing algorithm improvements.

Finally the session ended with a summary –

  • There is no scale you need that we cannot do
  • The most available messaging platform in any cloud
  • Extremely affordable

14:00 – Service Bus Update – Ashish Chhabria, Program manager, Microsoft

He spoke about the High Availability + Disaster recovery (50 regions) and Geo-Disaster Recovery available for Premium namespaces.

Three weeks back a feature called in-place upgrade was introduced, where your standard namespace can be migrated to a premium namespace. He also provided a quick demo of how to migrate a standard namespace to a premium namespace.

Enterprise features announced included,

  • VNET Service Endpoints and Firewalls (where users can limit access to your namespace from specific VNET or IP).
  • Managed Service Identity & RBAC (preview)

For the .NET and Java SDKs they added management, WebSocket and transaction support.

The Python SDK will be available soon.

Also inputs related to –

  • Logic App connector for Service Bus
  • Service Bus Queue as event handler (preview)
  • Data Explorer for Service Bus will be out soon.

14:30 – How Microsoft IT does Integration – Mike Bizub, Microsoft CSE&O

He started the session with a recap of the B2B approach and how customers dealing with larger volumes of data are moving from BizTalk Server to PaaS offerings. Logic Apps were created to support X12 and EDIFACT.

Telemetry

  • Hot-Path
  • Warm-Path

He also spoke about how CI/CD pipelines, deployments and the code repository are managed using ALM and DevOps, about unit and functional testing, and about defining policies for code review and security compliance.

Security and governance are important, and how your metadata is managed is very critical (SAS tokens, managed identities, secrets etc.).

14:50 – AIS Migration Story – Vignesh Sukumar, SE Core Engineering Service team, Microsoft

He started with a 20-second quick wake-up activity and spoke about the metadata-driven architecture along with migration accelerators (TPM tool) to migrate BizTalk to a PaaS architecture. These accelerators can be used to migrate artifacts like schemas, trading partners and orchestrations in a click. This reduces the migration time from days to a few hours (approx. 3 hours per transaction).

He also provided inputs related to EAI / Disaster recovery – How it can be important for high availability.

You can access Tools and scripts from the below URL. https://github.com/vdhana257/EnterpriseIntegration

15:30 – Enterprise Integration using Logic Apps – Divya Swarnkar, Senior Program Manager, Microsoft

She started her presentation with a scenario explaining the current state of the Contoso grocery store, and how Azure can be effective in tracking down wasted units when the storage malfunctions.

IoT sensors were installed on the storage units and the staff were equipped to receive notifications.

Logic App IOT trigger > Message Transform > Notify Store team > Create Maintenance order SAP

SAP Send equipment change request > Message transform > Create workorder in D365 > Notify Maintenance team

New Improvements 

  • Integration Account (Standard)
    • Limits for EDI artifacts raised to 1000
  • AS2 V2
    • Is core action – more performant, no limit on timeout
  • Monitoring
    • Batch trigger/action – monitor batch activity, release criteria, correlate items in a batch, source run of a resubmitted run.

Announcing Today

  • RosettaNet connector for Logic Apps (public preview). She also showed a quick demo related to it.

Coming really Soon

  • Data Gateway across subscriptions

Future Roadmap

Extended support for Health Care, additional connectors, integration accounts etc.

  • Industry Verticals
    • Business Verticals – healthcare (HL7)
    • Connectors – Oracle EBS, Netsuite, ODBC
    • Connector marketplace
  • Configuration store
  • Integration account – dev SKU, DR
  • Monitoring
    • Azure AppInsights support for Logic Apps.

16:00 – Serverless Stories and Real Use Cases – Thiago Almeida, Microsoft

The main focus or Agenda for his session was Serverless Integration.

  • What is meant by Serverless Integration
  • The file batch challenge
  • Python function use cases
  • Storage stats tracking
  • Durable functions
  • IOT
  • Azure Integration Services

He started with what is meant by serverless integration and the proposed solutions for solving the file batch challenge of appending files from the same batch into a single JSON file. He also showed a few customer use cases – storage stats tracking (scenario), an escalation workflow, and a durable function workflow at Fujifilm.

A purchase order scenario can be solved in combination with Logic Apps and Service Bus. PaaS + SaaS can be utilized for

  • On-prem connectivity
  • Workflow automation
  • API Management etc.

16:45 – Microsoft Flow Sarah Critchley / Kent Weare, Microsoft MVP – Business Applications

The main focus of the session was Microsoft Flow and how organizations can automate their workflows without writing a single line of code. Microsoft Flow and PowerApps bring new extensibility scenarios.

They also specified some Key capabilities of Microsoft flow –

  • Built in Approval Centre
  • Data Loss prevention policies
  • Business process flow
  • Geo triggering
  • Review and start flow from mobile
  • Package Flow apps, entities & dashboards to move between environments.

Microsoft Flow mobile and devices allow users to work less and perform activities from anywhere: create new flows, get push notifications, discover buttons, use button widgets, monitor flow activities and grant approvals.

Sarah spoke about Dynamics 365 and how it is used in departments like Sales, Marketing & Finance. She also spoke about the PaaS offering from Microsoft – PowerApps. The session finally wrapped up with a slide focusing on key takeaways.

17:30 The final session was cancelled and awards were given by BizTalk360 to their valuable customers. The day finally ended with networking, a reception and some beers.

You can check more insights on Day 2 here.

byBrick Development IoT Conference 2018 on Tynningsö

Update: for some reason this post was unpublished, so re-posting it again.

Wow I say!! Just wow! It was one helluva awesome weekend!

On Friday the 19th of October we all filed away to Tynningsö, located in the beautiful archipelago of Stockholm (Waxholm), for a weekend of IoT, socialising and insights.

We had rented a very nice house on the island and 10 of us headed off.


The cabin we hired in Tynningsö

The agenda was:

  • Friday night
    • Prep. approx. 10 Raspberry Pi 3 B+
    • Eat and be merry
    • Future Planning and Presentation(s)
    • Group work – more prep of the Pi
  • Saturday
    • 4hrs “intro” to Azure IoT from a trainer
    • Group work – think value, IoT
    • Dinner
    • Group work – spent time well past midnight
  • Sunday
    • Group work – presentation, final touches
    • Solution presentation
    • Home

What was so cool about this weekend was the intense engagement across the board. We split up into two teams and gave everybody free rein with the sensors, touch screens et al. that were brought along with us. The aim was to learn a bit more about IoT, get some hands-on experience with the newest cloud trends and focus on practical applications.

We had a trainer from 1337, Mats Tornberg, who was incredibly enthusiastic, giving us the intro to Azure IoT services and setting us off on our own path.

Azure Event Grid

Azure Event Grid is a platform-as-a-service (PaaS) offering – an event routing mechanism that lets you subscribe to events.

There can be different modes of communication in which a message is transmitted from one party to another –

  • One-way
  • Bi-Directional
  • Push and Pull mechanism etc

Azure offers various messaging services such as –

  • Notification Hubs – where you can Push mobile notifications
  • Logic Apps – which helps you to Schedule, build and automate processes (Workflows)
  • Azure Service Bus – Exchanging information between 2 parties.
    • Topics, Queues and Relays
  • Azure Event Hub – Also part of service bus namespace. One-way event processing system that can take millions of events per second and can be used for analysis and storage
  • IoT Hub
  • Azure Event Grid – Based on a publish/subscribe model. Easy to relate to if you have been working with Microsoft BizTalk Server.

Now you may wonder which one to use out of Azure Event Grid, Azure Event Hubs and Azure Service Bus. In cases where you need to process orders or do financial transactions, you should go with Azure Service Bus. On the other hand, when you need to stream large volumes of data like telemetry or event logs, i.e. messaging at scale, you should consider Azure Event Hubs. Azure Event Grid can be used when you want to react to an event.

Azure Event Grid is based on a publish/subscribe model in which there can be a single publisher but multiple subscribers that subscribe to those events.


Event Grid Flow

An event occurs within a publisher, which pushes it to an Event Grid topic. Subscribers listen to that topic, and event handlers are the ones responsible for handling those events.

In the example below we have created an Event Subscription which subscribes to an Event Grid Topic (the publisher).

The topic type can be any of the following –

  • Event Hubs Namespace
  • Storage Accounts
  • Azure Subscriptions
  • Resource Groups
  • Event Grid Topics (Used for Demo)

EventGrid2.JPG

We used Postman to make a POST request to the Azure Event Grid topic endpoint. This topic is subscribed to by an Event Subscription, and the messages are sent to the subscriber endpoint, i.e. RequestBin in this case.

POSTMAN

AzureEG_IN

REQUEST BIN 

AzureEG_OUT
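The same POST can of course be done from code. Below is a minimal C# sketch of publishing an event to a custom Event Grid topic; the endpoint and key are placeholders you would take from your own topic in the Azure portal.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class PublishToEventGrid
{
    static async Task Main()
    {
        // Placeholder topic endpoint and access key - use the values from your own topic.
        const string topicEndpoint = "https://my-topic.westeurope-1.eventgrid.azure.net/api/events";
        const string topicKey = "<topic-access-key>";

        // Event Grid expects an array of events in this schema.
        var events = new[]
        {
            new
            {
                id = Guid.NewGuid().ToString(),
                eventType = "Demo.ItemCreated",
                subject = "demo/items/1",
                eventTime = DateTime.UtcNow,
                data = new { message = "Hello from Event Grid" },
                dataVersion = "1.0"
            }
        };

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("aeg-sas-key", topicKey);

        var payload = new StringContent(JsonSerializer.Serialize(events), Encoding.UTF8, "application/json");
        var response = await client.PostAsync(topicEndpoint, payload);

        Console.WriteLine($"Event Grid responded with {(int)response.StatusCode}");
    }
}
```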

Azure Event Grid ensures reliability and performance for your apps. You can manage all your events in one place, and lastly, you just pay per event.

 

 

Benchmarking Applications with BenchmarkDotNet – Introduction

TL;DR

BenchmarkDotNet is a library that enables developers to define performance tests for their applications. It abstracts the complexity away from the developer and allows a degree of extensibility and customisation through its API. Developers can get started pretty quickly and refer to its documentation for advanced features.

This is not a post that actually performs benchmarking, but rather one that introduces BenchmarkDotNet to developers. The sources used are available on GitHub.

 

Introduction

Typically, when we develop a piece of software, some degree of testing and measuring is warranted; the level depends on the complexity of what we are developing, the desired coverage, etc. Unit or integration tests are part of most projects by default and in the mind of any software developer; however, when it comes to benchmarking, things tend to be a little different.

Now, as a very personal opinion, benchmarking and testing are not the same. The goal of testing is functionality, comparing expected versus actual results; that is a particular characteristic of unit tests, and we also have integration tests, which have different characteristics and goals.

Benchmarking, on the other hand, is about measuring execution. We will likely establish a baseline that we can compare against, at least once, and there are things we are interested in, like execution time and memory allocation, among other performance counters. Accomplishing this can be challenging, and most important is doing it right. This is where BenchmarkDotNet comes into play: as with other aspects and problems of developing software, it is a library that abstracts various concerns away from the developer when defining, maintaining, and leveraging performance tests.

Some facts about BenchmarkDotNet

  • Part of the .NET Foundation.
  • Supported runtimes: .NET Framework (4.6.x+), .NET Core (1.1+), Mono.
  • Supported operating systems: Windows, macOS, Linux.

Benchmarking our Code

While it is unlikely that every piece of code in a system will need to be benchmarked, there might be scenarios where we can identify modules that are part of a critical path in our system and that, given their nature, are subject to benchmarking. What to measure, and how intensively, will vary on a case-by-case basis.

Whether or not we have expectations, we need to ensure that critical modules in a system are performing optimally if possible, or at least acceptably, and establish a baseline that we can compare against as we make gradual and incremental improvements. We need to define a measurable, quantifiable truth for our performance indicators.

Usage of a Library

There are many reasons, applicable to practically any library: favouring reusability and avoiding reinventing the wheel, but most importantly relying on something that is heavily tested and proven to work. In this particular case we care about how accurate the results are, and that depends on the approach. We have all seen examples out there where a class like Stopwatch is used; while that is not entirely bad, it is unlikely to ever provide the accuracy, flexibility, or extensibility that BenchmarkDotNet does. To mention some of the features BenchmarkDotNet provides:

  • Allows the developer to target multiple runtimes through jobs, for instance various versions of the .NET Framework and .NET Core; this is instrumental in avoiding extrapolated results.
  • Generation of reports in various formats that can be analysed.
  • Provides execution isolation to ensure running conditions are optimal.
  • Takes care of aspects like performing several iterations of execution and warm-up.

More information about the actual flow can be found in the How it works section in the Documentation.
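
To give a feel for the API, here is a minimal sketch of a benchmark class. The scenario (string concatenation versus StringBuilder) is made up purely for illustration; the point is the attributes: [Benchmark] marks a method to measure, [Params] re-runs the benchmarks per parameter value, and [MemoryDiagnoser] adds allocation figures to the report.

using System.Text;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// MemoryDiagnoser adds memory allocation columns to the results table.
[MemoryDiagnoser]
public class StringConcatBenchmarks
{
    // Each benchmark is run once per parameter value.
    [Params(100, 1000)]
    public int Iterations;

    // Baseline = true makes the other results show a ratio against this method.
    [Benchmark(Baseline = true)]
    public string Concatenation()
    {
        var result = string.Empty;
        for (var i = 0; i < Iterations; i++)
            result += i;
        return result;
    }

    [Benchmark]
    public string StringBuilderAppend()
    {
        var builder = new StringBuilder();
        for (var i = 0; i < Iterations; i++)
            builder.Append(i);
        return builder.ToString();
    }
}

public class Program
{
    // Run from a Release build; BenchmarkDotNet warns when optimisations are disabled.
    public static void Main() => BenchmarkRunner.Run<StringConcatBenchmarks>();
}

Running this produces a summary table with the mean execution time, error, and allocated memory for each combination of method and parameter value.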

Read More

Event sourcing as an evolutionary architectural pattern

In software, the only thing that is constant is change, and software architecture is no different: it has to evolve. Simply put, system components should be organized in such a way that they support constant change without any downtime. Even if all the components are changed or completely replaced, the show must go on.

Software architecture has to be technology agnostic, resilient, and designed for incremental change.

pexels-photo-194094

If we think about the Theseus paradox, evolutionary architectures care less about the sameness of the ship(s) and focus more on whether the ship keeps sailing.

We can call a system “evolutionary” if it is designed for incremental change and can evolve with the changing nature of the business. Microservices-style architecture is considered evolutionary because each service is decoupled from all other services in the system and communication between services is completely technology agnostic; therefore, replacing one microservice with another is rather easy.

Microservices architecture focuses more on technology-neutral communication between components, loose coupling, versioning, automated deployment, etc., but it doesn’t say anything about how each service, or the system as a whole, should store its data internally.

The event sourcing (ES) pattern can serve as the missing piece of the puzzle by defining a data storage strategy.

Butchering the data is a criminal offense

In many businesses, modifying or changing the data is a crime: banking, forensics, medicine, law, etc. Remember, accountants don’t use erasers. Whether we like it or not, for certain businesses like telecoms and ISPs, retaining data for a period of time is compulsory.

Can you imagine CCTV systems without recording? What happens if something occurs and you have no record of it?

DrMikeAamodt.jpg

Most serial killers kill for enjoyment or financial gain.

In the early 1990s, Dr. Mike Aamodt started researching serial killers’ behaviour. It took him and his students decades of studying and analyzing public records before they could answer why and how serial killers kill. This wouldn’t have been possible without historical records. In order to answer future questions, we must persist everything that is happening today.

Don’t create serial killers of the data.

We all know that data is the currency of the future. We don’t know what tools and methods we will have at our disposal in a decade, and we certainly can’t predict how important that data could be for our businesses or for us personally. Still, we design and develop applications that are constantly murdering valuable data.

CQRS/ES

We as a species accumulate memories from our experiences; we simply can’t think like a CRUD system. There is a reason we love stories and pretty much everyone hates spoilers: our brains work like an ES system.

Event sourcing is a way of persisting domain events (the changes that have occurred over time) in your application; that series of events determines the current state of the application. ES and CQRS go hand in hand, and we simply can’t talk about ES without mentioning CQRS (Command Query Responsibility Segregation). I generally explain CQRS with the notion of one-way and two-way streets, or separate paths for pedestrians, cycling, and other traffic.

pexels-photo-210182

let’s see how Greg Young himself explains it.

CQRS is simply the creation of two objects where there was previously only one. The separation occurs based upon whether the methods are a command or a query (the same definition that is used by Meyer in Command and Query Separation: a command is any method that mutates state and a query is any method that returns a value).
—Greg Young, CQRS, Task Based UIs, Event Sourcing agh!
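
To make the idea concrete, here is a small, hypothetical C# sketch of an event-sourced account; the domain and names are made up for illustration. State is never overwritten: every change is recorded as an event, and the current balance is derived by replaying that stream of events. The command/query split in the quote above maps to the Deposit/Withdraw methods versus reading the Balance.

using System;
using System.Collections.Generic;

// Domain events: immutable facts about what happened, never modified or deleted.
public abstract record AccountEvent(DateTime OccurredAt);
public record MoneyDeposited(decimal Amount, DateTime OccurredAt) : AccountEvent(OccurredAt);
public record MoneyWithdrawn(decimal Amount, DateTime OccurredAt) : AccountEvent(OccurredAt);

public class Account
{
    private readonly List<AccountEvent> _events = new();

    // Query side: current state, derived entirely from the events.
    public decimal Balance { get; private set; }
    public IReadOnlyList<AccountEvent> Events => _events;

    // Command side: validate, then record a new event.
    public void Deposit(decimal amount) => Apply(new MoneyDeposited(amount, DateTime.UtcNow));

    public void Withdraw(decimal amount)
    {
        if (amount > Balance) throw new InvalidOperationException("Insufficient funds");
        Apply(new MoneyWithdrawn(amount, DateTime.UtcNow));
    }

    // Rebuild the current state at any point by replaying the full history.
    public static Account Replay(IEnumerable<AccountEvent> history)
    {
        var account = new Account();
        foreach (var e in history) account.Mutate(e);
        return account;
    }

    private void Apply(AccountEvent e)
    {
        Mutate(e);
        _events.Add(e); // In a real system the event would go to an event store.
    }

    private void Mutate(AccountEvent e) => Balance = e switch
    {
        MoneyDeposited d => Balance + d.Amount,
        MoneyWithdrawn w => Balance - w.Amount,
        _ => Balance
    };
}

Because the events are never thrown away, Replay gives you time travel for free: replaying a subset of the history shows the account as it was at any earlier point.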

The advantages of using CQRS/ES

  • Complete log of changes
  • Time traveling (replaying events is like storytelling)
  • Easy anomaly and fraud detection (by log comparison)
  • Traceability
  • Death of “cannot reproduce”, i.e. easy debugging (production data can be debugged in a dev environment)
  • Read (with CQRS) and Write performance
  • Scalability (with CQRS)

Common use-cases

Event sourcing enables you to audit state changes in your system; auditors will love you for this. Historical records can be used to perform analysis and identify interesting patterns. We should consider ES when accountability is critical (e.g. banking, medicine, law).

If your system is event-driven by nature, then using ES is a natural choice (e.g. game score tracking, online gaming, stock monitoring, real estate, vehicle tracking, social media like Twitter, Facebook, LinkedIn, etc.).

You should consider this pattern if you need to maintain versions or historical data, for example for document handling, wikis, etc.

Similarly, after each change, we can replay all the events and inspect the logs to see whether the expected change is there and whether it works.

Conclusion

Most people think embracing ES is complex; I believe the opposite. I think ES systems are easy to scale and change. Mature industries like banking, finance, law, and medicine were already using these methods before computers were invented.

We simply can’t destroy the data, because we can’t predict what tools will be available a year from now, or how useful this data will be in the future.

In a forthcoming blog, we will show you a production-quality implementation of a system that has the potential to evolve.

If you happen to be in Stockholm and have an irresistible urge to discuss evolutionary architectures such as CQRS/ES, let’s have a “fika”.

Sharing is caring

At byBrick Development, the culture is focused on sharing knowledge.

pkn-sajid

To that effect we run a program called byKnowledge and as part of byKnowledge we have a special session type called Pecha Kucha.

Pecha Kucha Format

The format is very simple and invites 4-5 different presentations from individual members of byBrick Development.

  • Each session is 7 minutes in duration
  • Topic can be anything, including non-technical

pkn-hassan

The idea is to showcase and share knowledge in a very succinct manner. It is very effective and also allows us to get to know our colleagues a little better.

The byKnowledge, and by extension Pecha Kucha, events run monthly for us, and we continue to improve on the format and style.

The topics from last night

We had a great range of topics last night; the focus this time was more technical in nature, but previously we have even had “Stress Management” on the list.

pkn crew bybrick development

Part of the byBrick Development crew

Benefits of sharing knowledge

We work in an industry where it is nigh on impossible to be aware of every single aspect.
pkn-osman

In order to gain an incremental curve of learning and knowledge, we work through our own experiences and share them with our colleagues.

Aside from attending conferences, meetups and training we gain exponentially by collectively sharing the information we gain.

As the opening stated, it is heavily embedded in our culture, and it’s a testament to the people we work with that we get the opportunity to learn from their real-life experiences.

CFO asks his CEO: What happens if we invest in developing our people and then they leave the company?

CEO answers: What happens if we don’t, and they stay?

I know! It’s a meme that’s done the rounds for years now, but honestly, its message is incredibly accurate.

Cultural Aspects

pkn-touhisaari

We are creating a culture where the focus is on giving our people the ability to change and adapt. It’s not something which is all that common, but it’s a strong focus for us.

Luckily our consultants have embraced this, and we continue to adopt and embrace new ideas; it is the core of what byBrick is all about.

BizTalk Management and MessageBox Databases Sync Issue – Part 2 – Un-Deployment

The second phase of the BizTalk Management and MessageBox DB sync issue is un-deploying/deleting an application. Fortunately, the issue affected only 2 applications.

When we tried to delete an application (either from the Admin Console or with the BTSTask RemoveApp command) in order to deploy a new version, we got an error popping up every time saying:

The service action could not be performed because the service is not registered

The application was visible in the BizTalk Admin Console.

The funny part was that the application was running without any issue, and all messages subscribed to by the artifacts of this application were being processed successfully.

This bought me some time to dig into the issue.

From previous experience it was evident that something was missing in the BizTalk databases, most likely in BizTalkMsgBoxDb. Analyzing further with the help of SQL Profiler and BizTalk tracing, we found that the port information was not present in the Services table and the application name was not registered in the Modules table.

Resolution

Take a backup of the BizTalk databases (we can rely on the SQL jobs for the BizTalk DBs 🙂).

Stop all host instances.

Check the application artifacts in the following tables: bts_application, bts_receiveport, bts_sendport, and bts_assembly in BizTalkMgmtDb. (If you have other artifacts in your application, you also need to check the respective tables, e.g. for send port groups, orchestrations, etc.)

Insert the application name into the Modules table, which will auto-generate a module ID. Use this ID (nModuleID) in the Services table:

INSERT INTO [BizTalkMsgBoxDb].[dbo].[Modules] ([nvcName],[dtTimeStamp])
VALUES ('Application.MyApplication', GETDATE())

Find the service instance IDs (uidGUID) in bts_receiveport, bts_sendport, i.e. all applicable artifact tables:

SELECT uidGUID FROM [BizTalkMgmtDb].[dbo].[bts_receiveport] WITH (NOLOCK)
WHERE nvcName =<'your artefact name'>

SELECT uidGUID FROM [BizTalkMgmtDb].[dbo].[bts_sendport] WITH (NOLOCK)
WHERE nvcName =<'your artefact name'>

Then insert the service instance ID (uidGUID) into the Services table together with the module ID, for all artifacts, one by one.

I had 2 receive ports and 2 send ports, which led me to make 4 entries in the Services table.

Get the module ID (nModuleID) from the Modules table for the respective application and use it in the query below:

INSERT INTO [BizTalkMsgBoxDb].[dbo].[Services]
([uidServiceID],[uidServiceClassID],[nModuleID],[fAttributes])
VALUES ('<ServiceInstanceID from artifact table>', NULL, <ModuleID>, 0)

Finally

Un-deploy or re-deploy happily.