The BLOG

BizTalk Management and MessageBox Databases Sync Issue – Part 1 – Deployment

While using a clustered BizTalk environment with Availability Groups (not a Failover Cluster), an issue was found while deploying a BizTalk application.

When the BizTalk Management and MessageBox databases go out of sync (for some specific or unknown reason), we get an error while deploying an application. (It could be one or more applications; in my case it was only one.)

Deploying another version of the same application into the BizTalk environment then fails.

We noticed that activation subscriptions were left behind even after un-deploying the BizTalk application. It is impossible to deploy the BizTalk application again while these orphan subscriptions exist.

Executing the AddApp command gives the error:

Error: Application registration failed because the application already exists

Executing the RemoveApp command for the same application gives the error:

Error: Application “M………” not found in configuration database.

[Screenshots: AddApp and RemoveApp command errors]

This proves there is a contradiction between the BizTalk database instances.

The Event Log produces only a generic error:

Unable to communicate with MessageBox BizTalkMsgBoxDb on SQL Instance ABCD11111\MessageBoxInst. Error Code: 0x8004d00e. Possible reasons include:
1) The MessageBox is unavailable.
2) The network link from this machine to the MessageBox is down.
3) The DTC Configuration on either this local machine or the machine hosting this MessageBox is incorrect.

Reason

  1. Possibly, Maximum Degree of Parallelism (MAXDOP) for the MessageBox database was not set to 1. It was greater than 1 during this period, which may have caused database issues while the databases were being synchronized in the AG.
  2. Orphan subscriptions of the application.
  3. Orphaned entries of the application left in the MessageBox tables.

Steps for Tracing the issue

  1. Executed the command BTSTask ListApps.
  2. Executed the command BTSTask ListApp /ApplicationName:”M………”.
  3. Tested the DTC settings.

No trace of the application was found.

The last option left is to dig into the BizTalk databases.

  1. Run SQL Profiler against the MessageBox and Management databases while executing the BTSTask commands, and collect the trace log.
  2. Run a BizTalk trace using “PSSDiagForBizTalk” while executing the BTSTask commands.

Search for the application name in both trace logs and identify the stored procedure or process causing the issue.

Resolution

This issue appears in the MessageBox database due to orphaned entries of the application. Execution of one stored procedure, bts_AdminAddModule, was interrupted during deployment and the application was not deleted from the Modules and Services tables. The Modules and Services tables in the MessageBox are mainly responsible for subscriptions.

Note: Do not forget to stop the hosts while performing the activity below.

  1. Delete the orphan subscriptions using their subscription IDs (using a SQL query or a tool you prefer).
  2. Run the query below and get the nModuleID of the application causing the issue.

SELECT * FROM [BizTalkMsgBoxDb].[dbo].[Modules] WITH (NOLOCK)

  3. Check the Application and Instances tables for instances belonging to that nModuleID. The query result must be empty (see the verification sketch after this list).
  4. Run the queries below, replacing 111 with the nModuleID found in step 2. Run them with ROLLBACK first to verify the affected row counts, then change ROLLBACK to COMMIT to actually apply the deletes.

BEGIN TRANSACTION
DELETE FROM [BizTalkMsgBoxDb].[dbo].[Services] WHERE nModuleID = 111
DELETE FROM [BizTalkMsgBoxDb].[dbo].[Modules] WHERE nModuleID = 111
ROLLBACK TRANSACTION
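Before deleting anything, it helps to preview exactly which rows the step-4 statements would remove. A minimal verification sketch, using only the Modules and Services tables referenced above and the same 111 placeholder for your nModuleID:

-- Preview the rows that the deletes in step 4 would remove.
-- 111 is a placeholder; substitute the nModuleID found in step 2.
SELECT * FROM [BizTalkMsgBoxDb].[dbo].[Services] WITH (NOLOCK) WHERE nModuleID = 111
SELECT * FROM [BizTalkMsgBoxDb].[dbo].[Modules] WITH (NOLOCK) WHERE nModuleID = 111

If these return rows for an application that is no longer deployed, those are the orphaned entries described in the Resolution above.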


Create a SQL database on Azure using PowerShell and access it from on-premises Microsoft SQL Server Management Studio

SQL Azure is a cloud database-as-a-service provided by Microsoft. The best part is that the data is hosted and managed in Microsoft data centers. We can build our applications on premises and move our data to the cloud.

The Azure SQL model provides:

  • Elastic database pools – allow you to provision thousands of databases as you need; they grow and shrink based on your requirements.
  • Azure-hosted databases.
  • Pay for only what you use (a number of plans are available).
  • Auto-scale.
  • Geo-replication for disaster recovery.
  • Reduced hardware cost and maintenance.

In this example, we will create a SQL database in the Azure cloud and access the SQL server from on-premises SQL Server Management Studio. We will implement this using PowerShell commands and later configure the firewall rules that allow access from outside Azure.

Pre-requisites

Once we are ready with the Azure PowerShell (AzureRM) module, open Windows PowerShell ISE (x86) as an administrator.
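If the module is not yet installed, it can be pulled from the PowerShell Gallery. A minimal sketch, assuming PowerShell 5.0 or later with PowerShellGet available:

# Install the AzureRM module from the PowerShell Gallery
Install-Module -Name AzureRM -Scope CurrentUser
# Load it into the current session
Import-Module AzureRM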

  • Log in to your Azure account using the command below.
Login-AzureRmAccount

[Screenshot: Azure login dialog]

Apply the necessary login details and click OK. After a successful login:

  • Let’s create a resource group.
$location = "westus"
$resourceGroup = "mysqlserverRG"
New-AzureRmResourceGroup -ResourceGroupName $resourceGroup -Location $location
  • Let’s create a logical SQL server using the “New-AzureRmSqlServer” cmdlet. A logical SQL server contains a group of databases managed as a group.
$serverName = "myblogazuresqlserver"
$username = "mypersonallogin"
$password = "xxxxxxx"
New-AzureRmSqlServer -ResourceGroupName $resourceGroup -ServerName $serverName -Location $location -SqlAdministratorCredentials $(New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $(ConvertTo-SecureString -String $password -AsPlainText -Force))

Once the logical server is in place:

  • The next step is to create a database within that server.
$databaseName = "myBlogDatabase"
New-AzureRmSqlDatabase -ResourceGroupName $resourceGroup -ServerName $serverName -DatabaseName $databaseName

Let’s navigate to the Azure portal and check the database we created, “myBlogDatabase”. It uses the Standard pricing tier:

  • DTU: 10 (S0) “Database Transaction Units”
  • Storage: 250 GB

Now, when you try to log in to the cloud SQL server from your on-premises SQL Server Management Studio, you will receive an error message like the one below.

[Screenshot: connection blocked by the server firewall]

Navigate to the Azure portal and configure the firewall rules to allow access from your on-premises machine.

Navigate to your database created in the cloud -> click Firewall settings -> specify the rule name and the start and end IP addresses. Once the rules are in place, you can successfully log in to Azure SQL Server from on-premises SQL Server Management Studio.
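Alternatively, the same rule can be created from PowerShell. A sketch reusing the variables defined earlier; the rule name and IP addresses below are placeholders for your own values:

# Allow a single on-premises public IP through the server firewall
New-AzureRmSqlServerFirewallRule -ResourceGroupName $resourceGroup -ServerName $serverName -FirewallRuleName "OnPremClient" -StartIpAddress "203.0.113.10" -EndIpAddress "203.0.113.10"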

[Screenshot: successful login from SQL Server Management Studio]

Resilient Systems with Polly.NET

General

Most of the web sites and applications we use today are distributed in nature, running on highly complex infrastructure and built with sophisticated cloud software design patterns, all factors that can lead to difficulties and failure. Companies like Amazon, Netflix, Hulu and Facebook, to mention some, have invested in infrastructure and software to deal with failure. We should not be strangers to the fact that systems will fail; we should embrace that and always be prepared for it.

Distributed Systems

Some systems eventually face the situation where they need to handle larger workloads with virtually no impact on performance, at least nothing “noticeable” by users, and this leads engineers to re-think approaches and architect systems that can properly scale. Distributed systems are strategically designed and built with such scenarios in mind: engineers connect different components together and establish protocols, workflows and policies for efficient and reliable communication. The trade-off, however, is that this inevitably introduces more complexity as well as new risks and challenges.

Resiliency in Distributed Systems

This has to do with how a system copes with difficulties; in particular, such a system should be able to deal with failure and ideally recover from it. There are different events that introduce the risk of failure, such as:
  • General network connectivity intermittency or temporary failure, i.e. outages.
  • Spikes in load.
  • Latency.
There are also quite a few scenarios we can encounter where dependency between systems exists, to mention some:
  • Basic network dependencies like database servers.
  • Systems integration and distributed processes.
  • SOA-based applications.
At the end of the day, it is about embracing failure: being mindful of it when designing and developing solutions, and paying close attention to requirements, workflows and key aspects like critical paths, integration points, cascading failures and system back pressure.

Introducing Polly.NET

This is a library that enables you to write fault-tolerant, resilient .NET based applications by means of applying well-known techniques and software design patterns through a fluent API. To mention some of the features of the library:
  • Lightweight.
  • Zero dependencies; it is only the Polly.NET packages.
  • Fluent API; supports both synchronous and asynchronous operations.
  • Targets both .NET Framework and .NET Core runtimes.
The way the library works is by allowing developers to express robust and mature software design patterns through Policies. The developer will usually go through the following steps when using it:
  1. Identify what problem to handle; this will likely translate to an exception in .NET.
  2. Choose the Policy or Policies to use.
  3. Execute the Policy or Policy group.

Policies

To demonstrate some of the Policies offered by Polly.NET, we are going to assume we need to reach a Web API endpoint to retrieve a list of Users. These are some of the Policies that the library offers out of the box; as mentioned before, we can use them individually or combined. The first one we will look at is Retry, which allows you to specify the number of attempts to make should an action fail the first time. This could be interpreted as the system just having a short-lived issue, so we are willing to give it another chance. We can see an example below:
// Simple Retry Policy, will make three attempts before giving up.
var retryPolicy = Policy
                    .Handle<HttpRequestException>()
                    .RetryAsync(3);

HttpResponseMessage response = await retryPolicy.ExecuteAsync(() => httpClient.GetAsync(uriToCheck));
The approach presented above is pretty straightforward. A failure is considered to happen when an exception of type HttpRequestException is raised; if this happens, the policy will retry as many times as specified in RetryAsync, in this case three. It will give up if the call still does not succeed, in other words if the exception continues to be raised, in which case it will bubble up.
There are various versions of Retry. For instance, we can tell it to just retry Forever, although that might not be the best choice in most situations; there is also a version that allows us to do exponential back-off, which is Wait and Retry.
It behaves like the Retry policy, with the addition of being able to space out each call with a given duration; take the following example:
// Retry with a 'Wait' period, given as a TimeSpan by the second argument to WaitAndRetryAsync
var retryPolicy = Policy
                   .Handle<HttpRequestException>()
                   .WaitAndRetryAsync(3, (r) => TimeSpan.FromSeconds(r * 1.5));

HttpResponseMessage response = await retryPolicy.ExecuteAsync(() => httpClient.GetAsync(uriToCheck));

Now, in this version, we can see an extra lambda function returning a TimeSpan, which represents the time we want to wait before each subsequent attempt. If we dissect that line we can find the following:

  • The argument r passed to the lambda represents the attempt number; in this case it will have three possible values: 1, 2 and 3.
  • We multiply the attempt number by a constant greater than 1, which gives the effect of an increasing delay on each attempt.
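With the 1.5 constant above, the waits work out to 1.5 seconds before the first retry, 3 seconds before the second and 4.5 seconds before the third.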

Applying a Strategy

As mentioned before, we can apply several policies as a group, for instance Retry with Fallback; this is achieved by using Policy.Wrap. We can construct strategies by properly combining different policies that can be applied and re-used. This obviously goes on a case-by-case basis; what is certain is that there are situations where a single policy may not suffice.
The following example demonstrates the combination of Retry and Wait with Fallback; the latter allows us to degrade gracefully upon failure.
// Exponential back-off behavior
var retryPolicy = Policy<HttpStatusCode>
                    .Handle<HttpRequestException>()
                    .WaitAndRetryAsync(3, (r) => TimeSpan.FromSeconds(r * 1.5));

// Degrade gracefully by returning Bad Gateway
var fallbackPolicy = Policy<HttpStatusCode>
                       .Handle<HttpRequestException>()
                       .FallbackAsync((t) => Task.FromResult(HttpStatusCode.BadGateway));

/**
* Combine them, first argument is outer-most, last is inner-most.
* Policies are evaluated from inner-most to outer-most, in this case:
* retry then fallback.
*/
var policyStrategy = Policy.WrapAsync(fallbackPolicy, retryPolicy);

HttpStatusCode resultCode = await policyStrategy.ExecuteAsync(async () => (await httpClient.GetAsync(uriToCheck)).StatusCode);

We are dealing with status codes here for simplicity of the example, but a better approach for the fallback action would be to follow a Null Object design pattern and produce an empty list of Users; this really depends on the requirements and structure of our application. The patterns we have introduced in this article are only a few I have picked to show how the library works and how such patterns can be applied. Polly exposes even more Policies that enforce a broad set of well-known software design patterns for resiliency (minimal sketches of the Null Object fallback and of Circuit Breaker appear at the end of this section), such as:

  • Circuit Breaker.
  • Advanced Circuit Breaker.
  • Bulkhead Isolation.
  • Fallback.
  • Cache (which gives us the cache-aside / read-through pattern).
It is up to us to identify and understand the interactions between the different components of an Application, possible fragile points where we could potentially face failure and define a solid strategy on how to deal with such situations.
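To make the Null Object idea mentioned earlier concrete, here is a minimal sketch; the User type is a hypothetical placeholder, and the snippet assumes the same context as the examples above plus System.Linq and System.Collections.Generic:

// Hypothetical User type; on repeated failure we degrade to an empty list of Users
var usersFallbackPolicy = Policy<IEnumerable<User>>
                            .Handle<HttpRequestException>()
                            .FallbackAsync((t) => Task.FromResult(Enumerable.Empty<User>()));

And as a quick taste of Circuit Breaker, a minimal sketch with illustrative thresholds:

// Break the circuit after 2 consecutive HttpRequestExceptions and
// fail fast for 30 seconds before letting calls through again.
var circuitBreakerPolicy = Policy
                             .Handle<HttpRequestException>()
                             .CircuitBreakerAsync(2, TimeSpan.FromSeconds(30));

HttpResponseMessage response = await circuitBreakerPolicy.ExecuteAsync(() => httpClient.GetAsync(uriToCheck));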


Using the right tool for the right job

For the most part, our lives aren’t that particularly complicated. Our decisions don’t always have the impact of a meteorite on our lives or others’. But some decisions do tend to follow us, nay, haunt us for a longer period of time than we originally anticipated – namely the choice we made when settling on a specific tool.

Some years back my wife asked me to build her a veggie garden out back, with a picket fence, gate, paved path, a simple shed and sandstone borders. It was a simple setup until I got to the stage where I needed to build the gate.

I should mention in passing that my handyman skills are pretty much trial and error, or whatever tips and tricks I can locate on the web (read: YouTube, Vimeo et al. – I need pictures too for this sort of work).

Rather than simply buying a finished gate, I had to go manly and build a custom one. After all, my skills had recently grown to the level of a master carpenter, no? The picket fence, veggie garden et al. were nearly complete.

After struggling for 8 hours more than was necessary, I finally had the gate finished and needed to mount it. Of course, it didn’t fit, so I had to dismember the gate again. This went on for a little while till I finally got it right.

The lesson I took away from this at the time was (of course) that carpentry wasn’t my main forte, so I could be excused for taking a little longer, producing a gate that wasn’t fitted professionally, and leaving a small gap between the fence and the sandstone border bricks.

Looking back, the first mistake was obviously the choice of tool to use – namely myself.

Facing choices in business

Moving along to an industry where such choices pop up on a near-daily basis, the ramifications of making the incorrect choice obviously have a steep impact on profitability and business continuity.

The IT&T industry is rife with these pitfalls, from the choice of service provider, through the hiring process, down to which framework to adopt for a given project. If you ask a practitioner for advice on which product, client tools or enterprise framework to use, self-interest will always kick in, and the answer will be whatever preserves the practitioner’s current status or enhances it.

The same can be said for having so-called independent consultants provide you with a recommendation. Said consultancy would naturally favour an answer which provides them with the highest chance of continuing the engagement. Would that make the answers and recommendations wrong or inaccurate? Of course not, not by those virtues alone.

This is of course what makes it even harder. Why can’t you trust an independent recommendation?

Simply because there are a thousand ways to skin a cat. What is the correct choice today will likely change in the near future as the business changes.

Buying off the rack

Information Technology adoption decisions tend to be made, firstly, on the financial impact of adoption. For a business which doesn’t deal in IT but is largely dependent on it, the bottom line will generally be cost, and the equation simple:

x ≥ y

where x = budget and y = cost.

Of course, the relevant departments will have done their due diligence with regards to product choices or service provider. They’d have had a string of sales and technical presales consultants strut their stuff, showing off their product or services in the best light possible.

People far smarter than I have been telling us about this for years already. So why is it that IT projects have some of the highest failure rates in the world? Why is it that budgets are only ever guides, not true costs of adoption? Surely we should have learned enough by now to know how to do this right.

It’s not always simple to make the right choice, even for subject matter experts who make a living working in a space littered with off-the-shelf solutions, talented developers, and magicians.

When working through the requirements, we all know that it’s important to engage in depth with stakeholders and business users. Buy-in from the business is mandatory for adoption or project success. Without it, you simply won’t ever be able to complete the project with a positive outcome.

Words of wisdom that have been known for decades to be immutable facts.

Taking on a custom solution

However, project success is not just about delivering on time and under budget; there are many more factors that have to be weighed before the stamp of approval is given.

This is especially true when the choice is made to create a custom solution – a solution which fits your business like a deerskin glove… smooth and comfortable.

One of the aspects that help define the cost of adoption for software projects is calculating technical debt.

Ted Theodoropoulos wrote an excellent article back in 2010 where he goes into the primary points needed in order to identify and calculate technical debt.

I can highly recommend reading his four-part series on Technical Debt (http://blog.acrowire.com/technical-debt/technical-debt-part-1-definition).

Accurately predicting the future

Obviously, it’s not feasible to accurately predict the future. I’m sure the world would be a very different place right now if that was not the case.

We spend a lot of effort and money on analyzing the operational paths which could potentially, with some degree of probability, come true… maybe. Business Intelligence analysis can show you predictive trend analysis outcomes, technical subject matter experts can tell you what direction the market is moving in, and your staff can give you their opinion based on past experience.

When the chips are down, the fact is…we just don’t know.

So since we can’t predict the future and haven’t quite mastered time travel yet, what can be done about it?

The most important aspect of any decision is that it cannot be definite. We know the future can change week by week, so why are we so set on ensuring that our decisions are definite?

How often have we seen the following play out after 2 years of product/project implementation?

Director: “Why did you choose x Product/Platform as the solution for our ERP? It has now cost us $X million so far, with no end in sight.”

IT Manager: “It was within our originally estimated budget and covered about 80% of our immediate business requirements 2 years ago, but that was before we expanded our service offering and went international. We didn’t know these changes would happen when the purchasing decision was made.”

The predictive nature of business risk calculation always looks to past trends, and when those prove incorrect, the crystal ball is hidden away and we’re stuck with the same moniker that plagues our industry – 20-20 hindsight.

One size fits all

Off-the-shelf products (or shrink-wrapped products) rarely fit every industry, let alone every business within one, and we generally pride ourselves on “doing it differently from all the rest”. So why should a product ever be expected to fit your company’s processes perfectly? Just because we told the sales rep what we wanted and were told that it could cover all of it?

No, that’s just not a realistic answer or solution. When implementing an existing product, the business has to expect that some level of internal change will be needed. How much will vary depending on the size of the implementation – the more areas it impacts, the higher the likelihood that something needs to change.

Choosing solutions that end up needing more external work than what they cover is obviously not a good investment.

An agile business needs agile decisions

The most important aspect of business today is that it needs to move with the needs of its customers and the market it operates in.

This expectation of agility needs to be applied to all facets of the business, especially when it comes to IT investments and adoption. Being agile in the ways your information is consumed, orders are processed and tasks are completed comes down to how the systems are designed, from the bottom up.

Using cloud services for key integration points could be vital for your IT investment’s ability to support an expanding business.

Being able to facilitate information sharing across international locations could be essential for business growth, IP discovery, and collaboration. The architecture around systems needs to be built with flexibility in mind – the same goes for the platforms and tools used to build individual components.

That’s why it’s important to use the right tool for the right job…even down to the level where lines of code are written and what framework is being used.

[byBrick Development logo]

At byBrick Development we are skilled in ascertaining best fit by applying a methodology to the decision which adheres to the architectural principles you have established.

If your company doesn’t have such principles in place, we can certainly assist in developing those as well.

byBrick Development – the BLOG

Why publish a blog?

This is probably a question a lot of non-techie companies would ask, and this post is here to explain why we at byBrick Development decided to publish a blog.

First of all, we have a group of extremely creative and highly competent technical practitioners, and what better way to share some of this knowledge than to write a blog about it?

Our core competencies are focused on Office 365, integration and bespoke .NET solutions, and many of our consultants have in excess of 10, 15 or even 20 years’ experience in the industry.

We are a highly diverse group of consultants, with people from all over the world, which is yet another particularity about byBrick Development that sets us apart from many other consultancies.

Another particularity is that many of our consultants are actively engaged in knowledge sharing and believe strongly in it – so much so that setting up a blog has been on their wish list for quite some time.

So what can be expected from this blog?

Seeing as there is a huge amount of experience amongst our consultants, it’s really very hard to specify just a single topic (or even a handful) that will appear here. We promise to ensure a great deal of spread across all of our technical and business knowledge.

But who are we?


The industry’s best IT Consultants…

We have chosen to focus on a small number of technologies and approaches in order to offer the best IT consultants in the areas where we operate. Our consultants are characterised by a unique ability to combine deep technical expertise with an understanding of various business needs and challenges.

IT should be as simple and cost-effective as possible. We work with proven platforms and standards to avoid locking our customers into systems and solutions that only we, or a few other providers, can maintain and develop further. Our development methods provide full transparency to our customers, from costing to project management and status.

Our customers choose us because we are the best at what we do!