
Azure Event Grid

Azure Event Grid is a platform-as-a-service (PaaS) offering: an event routing service that lets you subscribe to events and react to them.

There are different modes of communication in which a message can be transmitted from one party to another:

  • One-way
  • Bi-Directional
  • Push and pull mechanisms, etc.

Azure offers various messaging services, such as:

  • Notification Hubs – push notifications to mobile devices
  • Logic Apps – schedule, build and automate processes (workflows)
  • Azure Service Bus – exchange information between two parties
    • Topics, Queues and Relays
  • Azure Event Hubs – also part of the Service Bus namespace; a one-way event processing system that can take in millions of events per second and can be used for analysis and storage
  • IoT Hub
  • Azure Event Grid – based on a publish/subscribe model. Easy to relate to if you have been working with Microsoft BizTalk Server.

Now you may wonder which one to use: Azure Event Grid, Azure Event Hubs or Azure Service Bus? When you need to process orders or perform financial transactions, go with Azure Service Bus. When you need to stream large volumes of data such as telemetry or event logs, i.e. messaging at scale, consider Azure Event Hubs. Azure Event Grid is the right choice when you want to react to an event.

Azure Event Grid is based on a publish/subscribe model in which a single publisher can have multiple subscribers listening for its events.


Event Grid Flow

An event occurs within a publisher, which pushes it to an Event Grid topic. Subscribers listen to that topic, and event handlers are responsible for processing the events.

In the example below we have created an event subscription that subscribes to an Event Grid topic (the publisher).

The topic type can be any of the following:

  • Event Hubs Namespace
  • Storage Accounts
  • Azure Subscriptions
  • Resource Groups
  • Event Grid Topics (Used for Demo)


We have used Postman to issue a POST request to the Azure Event Grid topic endpoint. The event subscription picks up the event from the topic and delivers it to the subscriber endpoint, in this case RequestBin.
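To make the demo concrete, the sketch below shows roughly what that Postman call does, expressed with HttpClient in C#. The endpoint, access key and event fields are placeholders rather than values from the demo above; a custom topic accepts a POST carrying the aeg-sas-key header and a JSON array of events in the Event Grid event schema.

using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public static class EventGridPublisher
{
    public static async Task Main()
    {
        // Placeholder endpoint and key - copy the real values from the topic's Overview and Access keys blades.
        var topicEndpoint = "https://mytopic.westeurope-1.eventgrid.azure.net/api/events";
        var topicKey = "<topic-access-key>";

        // Event Grid expects an array of events in the Event Grid event schema.
        var payload = JsonSerializer.Serialize(new[]
        {
            new
            {
                id = Guid.NewGuid().ToString(),
                eventType = "myApp.orders.orderCreated",   // illustrative event type
                subject = "myapp/orders",
                eventTime = DateTime.UtcNow,
                data = new { orderId = "42" },
                dataVersion = "1.0"
            }
        });

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("aeg-sas-key", topicKey);

            var response = await client.PostAsync(topicEndpoint,
                new StringContent(payload, Encoding.UTF8, "application/json"));

            Console.WriteLine(response.StatusCode); // OK when the topic accepts the event batch
        }
    }
}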

Postman (request sent to the Event Grid topic endpoint)

RequestBin (event received at the subscriber endpoint)

Azure Event Grid ensures reliability and performance for your apps. You can manage all your events in one place, and you only pay per event.

 

 

Benchmarking Applications with BenchmarkDotNet – Introduction

TL;DR

BenchmarkDotNet is a library that enables developers to define performance tests for their applications. It abstracts the complexity away from the developer and allows a degree of extensibility and customisation through its API. Developers can get started pretty quickly and refer to its documentation for advanced features.

This is not a post about actually performing benchmarking but rather an introduction to BenchmarkDotNet. The sources used are available on GitHub.

 

Introduction

Typically, when we develop a piece of software, some degree of testing and measuring is warranted; how much depends on the complexity of what we are developing, the desired coverage, and so on. Unit and integration tests are part of most projects by default and are in the mind of any software developer. When it comes to benchmarking, however, things tend to be a little different.

Now, as a very personal opinion, benchmarking and testing are not the same. The goal of testing is functionality, comparing expected against actual results; this is particularly true of unit tests, while integration tests have different characteristics and goals.

Benchmarking, on the other hand, is about measuring execution. We will likely establish a baseline that we can compare against, at least once, and look at things like execution time and memory allocation, among other performance counters. Accomplishing this can be challenging, and doing it right even more so. This is where BenchmarkDotNet comes into play: as with other aspects of developing software, it is a library that abstracts the details away from the developer so that performance tests are easy to define, maintain and leverage.

Some facts about BenchmarkDotNet

  • Part of the .NET Foundation.
  • Runtimes supported: .NET Framework (4.6.x+), .NET Core (1.1+), Mono.
  • OS supported: Windows, macOS, Linux

 

Benchmarking our Code

While it is unlikely that every piece of code in the system will need to be benchmarked, there are scenarios where we can identify modules that are part of a critical path in our system and, given their nature, might be subject to benchmarking. What to measure, and how intensively, varies on a case-by-case basis.

Whether or not we have explicit expectations, we need to ensure that critical modules in a system perform optimally, or at least acceptably, and establish a baseline that we can compare against as we make gradual, incremental improvements. We need to define a measurable, quantifiable truth for our performance indicators.

Usage of a Library

There are many reasons, and they apply to practically any library: favouring reusability, avoiding reinventing the wheel and, most importantly, relying on something that is heavily tested and proven to work. In this particular case we care about how accurate the results are, and that depends on the approach. We have all seen examples out there where a class like Stopwatch is used; while that is not entirely bad, it is unlikely to ever provide the accuracy, flexibility or extensibility that BenchmarkDotNet does. To mention some of the features BenchmarkDotNet provides (a minimal example follows the list below):

  • Allows the developer to target multiple runtimes through Jobs, for instance various versions of the .NET Framework and .NET Core; this is instrumental in avoiding extrapolated results.
  • Generation of reports in various formats that can be analysed.
  • Provides execution isolation to ensure running conditions are optimal.
  • Takes care of aspects like performing several iterations of execution and warm-up.
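As a taste of the API (this is a minimal sketch rather than the sources linked above; the class, method names and scenario are made up for illustration), a benchmark is an ordinary class whose measured methods are marked with attributes, and the runner takes care of the rest:

using System.Text;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Measures two ways of building a string; BenchmarkDotNet takes care of
// warm-up, iterations, statistics and (with MemoryDiagnoser) allocations.
[MemoryDiagnoser]
public class StringConcatVsStringBuilder
{
    [Params(100, 1000)]
    public int N;

    [Benchmark(Baseline = true)]
    public string Concat()
    {
        var s = string.Empty;
        for (var i = 0; i < N; i++) s += "x";
        return s;
    }

    [Benchmark]
    public string Builder()
    {
        var sb = new StringBuilder();
        for (var i = 0; i < N; i++) sb.Append("x");
        return sb.ToString();
    }
}

public class Program
{
    // Run from a Release build; reports are written to the BenchmarkDotNet.Artifacts folder.
    public static void Main() => BenchmarkRunner.Run<StringConcatVsStringBuilder>();
}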

More information about the actual flow can be found in the "How it works" section of the documentation.


Event sourcing as an evolutionary architectural pattern

In software, the only thing that is constant is change, and software architecture is no different: it has to evolve. Simply put, system components should be organized in such a way that they support constant change without any downtime. Even if all the components are changed or completely replaced, the show must go on.

Software architecture has to be technology agnostic, resilient, and designed for incremental change.


If we think about the ship of Theseus paradox, evolutionary architectures care less about the sameness of the ship(s) and focus more on whether the ship keeps on sailing.

We can call any system "evolutionary" if it is designed for incremental change and can evolve with the changing nature of the business. Microservices-style architecture is considered evolutionary because each service is decoupled from all the other services in the system and communication between services is completely technology agnostic; replacing one microservice with another is therefore rather easy.

Microservices architecture focuses on technology-neutral communication between components, loose coupling, versioning, automated deployment and so on, but it says nothing about how each service, or the system as a whole, should store data internally.

The event sourcing (ES) pattern can serve as the missing piece of the puzzle by defining a data storage strategy.

Butchering the data is a criminal offense

In many businesses, modifying or changing data is a crime, for example in banking, forensics, medicine and law. Remember, accountants don't use erasers. And whether we like it or not, certain businesses such as telecoms and ISPs are required to retain data for a period of time.

Can you imagine CCTV systems without recording? What happens if an incident occurs and you have no record of it?


Most serial killers kill for enjoyment or financial gains.

In the early 1990s, Dr. Mike Aamodt started researching serial killers' behavior. It took him and his students decades of studying and analyzing public records before they could answer why and how serial killers kill. This would not have been possible without historical records. In order to answer future questions, we must persist everything that is happening today.

Don’t create serial killers of the data.

We all know that the currency of the future is data. We don't know what tools and methods we will have at our disposal in a decade, and we certainly can't predict how important that data could be for our businesses or for us personally. Still, we design and develop applications that are constantly murdering valuable data.

CQRS/ES

We as a species accumulate memories from our experiences; we simply can't think like a CRUD system. There is a reason we love stories and pretty much everyone hates spoilers: our brains work like an ES system.

Event sourcing is a way of persisting domain events (the changes that have occurred over time) in your application; that series of events determines the current state of the application. ES and CQRS go hand in hand, and we simply can't talk about ES without mentioning CQRS (Command and Query Responsibility Segregation). I generally explain CQRS with the notion of one-way and two-way streets, or separate paths for pedestrians, cyclists and other traffic.
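As a minimal illustration (a hand-rolled C# sketch, not a particular ES framework; the account example and names are made up), the current state is simply a fold over the stored events:

using System;
using System.Collections.Generic;
using System.Linq;

// The current balance is never stored directly; it is derived by replaying
// the events recorded so far, and the event stream is append-only.
public abstract class AccountEvent { public decimal Amount; }
public sealed class Deposited : AccountEvent { }
public sealed class Withdrawn : AccountEvent { }

public static class Account
{
    // Current state = a fold over the event stream.
    public static decimal Balance(IEnumerable<AccountEvent> events) =>
        events.Aggregate(0m, (balance, e) =>
            e is Deposited ? balance + e.Amount :
            e is Withdrawn ? balance - e.Amount : balance);
}

public static class Demo
{
    public static void Main()
    {
        var history = new List<AccountEvent>
        {
            new Deposited { Amount = 100m },   // events are only ever appended
            new Withdrawn { Amount = 30m }
        };
        Console.WriteLine(Account.Balance(history)); // prints 70
    }
}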


Let's see how Greg Young himself explains it.

CQRS is simply the creation of two objects where there was previously only one. The separation occurs based upon whether the methods are a command or a query (the same definition that is used by Meyer in Command and Query Separation: a command is any method that mutates state and a query is any method that returns a value).
—Greg Young, CQRS, Task Based UIs, Event Sourcing agh!

The advantages of using CQRS/ES

  • Complete log of changes
  • Time traveling (replaying events is like storytelling)
  • Easy anomaly and fraud detection (by log comparison)
  • Traceability
  • Death of "cannot reproduce", i.e. easier debugging (production events can be replayed in a dev environment)
  • Read (with CQRS) and Write performance
  • Scalability (with CQRS)

Common use-cases

Event sourcing enables you to audit every state change in your system; auditors will love you for this. Historical records can be used to perform analysis and identify interesting patterns. We must consider ES when accountability is critical (e.g. banking, medicine, law).

If your system is event-driven by nature, then using ES is a natural choice (e.g. game score tracking, online gaming, stock monitoring, real estate, vehicle tracking, or social media like Twitter, Facebook and LinkedIn).

You should also consider this pattern if you need to maintain versions or historical data, for example in document handling, wikis and so on.

Similarly, after each change we can replay all the events and inspect the logs to verify that the expected change is there and that it works.

Conclusion

Most people think embracing ES is complex; I believe the opposite. ES systems are easy to scale and change. Mature industries like banking, finance, law and medicine were already using these methods before computers were invented.

We simply can't destroy data, because we can't predict what tools will be available a year from now, or how useful that data will be in the future.

In a forthcoming blog, we will show you a production-quality implementation of a system that has the potential to evolve.

If you happen to be in Stockholm and have an irresistible urge to discuss evolutionary architectures such as CQRS/ES, let's have a "fika".

Sharing is caring

At byBrick Development, our culture is focused on the sharing of knowledge.


To that effect we run a program called byKnowledge and as part of byKnowledge we have a special session type called Pecha Kucha.

Pecha Kucha Format

The format is very simple and invites 4-5 different presentations from individual members of byBrick Development.

  • Each session is 7 minutes long
  • The topic can be anything, including non-technical subjects

The idea is to showcase and share knowledge in a very succinct manner. It is very effective and also allows us to get to know our colleagues a little better.

The byKnowledge, and by extension Pecha Kucha, events run monthly, and we continue to improve on the format and style.

The topics from last night

We had a great range of topics last night; the focus this time was more technical in nature, but previously we have even had "Stress Management" on the list.


Part of the byBrick Development crew

Benefits of sharing knowledge

We work in an industry where it is nigh on impossible to be aware of every single aspect.

In order to gain an incremental curve of learning and knowledge, we work through our own experiences and share them with our colleagues.

Aside from attending conferences, meetups and training, we gain exponentially by collectively sharing what we learn.

As the opening stated, it is heavily embedded in our culture and it’s a testament to the people we work with that we get the opportunity to learn from their real life experiences.

CFO asks his CEO: What happens if we invest in developing our people and then they leave the company?

CEO answers: What happens if we don’t, and they stay?

I know, it's a meme that's done the rounds for years now – but honestly, its message is incredibly accurate.

Cultural Aspects

We are creating a culture where the focus is on giving our people the ability to change and adapt. It's not something that is all that common, but it's a strong focus for us.

Luckily our consultants have embraced this, and we continue to adopt and embrace new ideas – it is the core of what byBrick is all about.

 

 

BizTalk Management and MessageBox Databases Sync Issue – Part 2 – Un-Deployment

The second phase of the BizTalk Management and MessageBox DB sync issue concerns un-deploying/deleting an application. Fortunately, the issue affected only two applications.

When we tried to delete an application (either from the Console or with the BTSTask RemoveApp command) in order to deploy a new version, we got an error popping up every time:

The service action could not be performed because the service is not registered

The application was visible in the BizTalk Admin Console.

The funny part was that the application was running without any issues, and all messages subscribed to by the artifacts of this application were being processed successfully.

This bought me some time to dig into the issue.

From previous experience it was evident that something was missing in the BizTalk databases, most likely in BizTalkMsgBoxDb. Analyzing further with the help of SQL Profiler and BizTalk tracing, we found that the port information was not present in the Services table and the application name was not registered in the Modules table.

Resolution

Take a backup of the BizTalk databases (we can rely on the SQL jobs for the BizTalk DBs 🙂).

Stop all host instances.

Check the application artifacts in the following tables in BizTalkMgmtDb: bts_application, bts_receiveport, bts_sendport and bts_assembly. (If you have other artifacts in your application, you also need to check the respective tables, e.g. for send port groups, orchestrations etc.)

Insert the application name into the Modules table, which will auto-generate a module ID. Use this ID ('nModuleID') in the Services table.

Insert into [BizTalkMsgBoxDb].[dbo].[Modules] ([nvcName],[dtTimeStamp])
values ('Application.MyApplication', GETDATE())

Find the service IDs (uidGUID) in bts_sendport and bts_receiveport, i.e. all relevant artifact tables.

SELECT uidGUID FROM [BizTalkMgmtDb].[dbo].[bts_receiveport] WITH (NOLOCK)
WHERE nvcName = '<your artefact name>'

SELECT uidGUID FROM [BizTalkMgmtDb].[dbo].[bts_sendport] WITH (NOLOCK)
WHERE nvcName = '<your artefact name>'

Then insert the service ID (uidGUID) into the Services table along with the module ID, for each artifact one by one.

I had two receive ports and two send ports, which meant making four entries in the Services table.

Get the nModuleID from the Modules table for the respective application and use it in the query below:

INSERT INTO [BizTalkMsgBoxDb].[dbo].[Services]
([uidServiceID], [uidServiceClassID], [nModuleID], [fAttributes])
VALUES ('<ServiceInstanceID from artifact table>', NULL, <ModuleID>, 0)

Finally

Un-deploy or re-deploy happily.

 

BizTalk Management and MessageBox Databases Sync Issue – Part 1 – Deployment

While using a clustered BizTalk environment with Availability Groups (not a failover cluster), an issue was found when deploying a BizTalk application.

When the BizTalk Management and MessageBox databases go out of sync (for some specific or unknown reason), we get an error while deploying an application. (It could affect one or more applications; in my case it was only one.)

Deploying another version of the same application into the BizTalk environment then fails.

We noticed that activation subscriptions were left behind even after un-deploying the BizTalk application. It's impossible to deploy the BizTalk application again with these orphan subscriptions in place.

Executing the AddApp command gives the error:

Error: Application registration failed because the application already exists

Executing the RemoveApp command for the same application gives the error:

Error: Application ” M         ” not found in configuration database.


This proves there is a contradiction between the BizTalk database instances.

The Event Log shows only a generic error:

Unable to communicate with MessageBox BizTalkMsgBoxDb on SQL Instance ABCD11111\MessageBoxInst. Error Code: 0x8004d00e. Possible reasons include:
1) The MessageBox is unavailable.
2) The network link from this machine to the MessageBox is down.
3) The DTC Configuration on either this local machine or the machine hosting this MessageBox is incorrect.

Reason

  1. Possibly, the Maximum Degree of Parallelism (MAXDOP) for the MessageBox database was not set to 1. It was greater than 1 during this period, which may have caused database issues while the databases were being synced in the AG.
  2. Orphan subscriptions of the application.
  3. Orphans of the application.

Steps for Tracing the issue

  1. Executed command BTSTask ListApps
  2. Executed command BTSTask ListApp /ApplicationName:”M………”
  3. Checked and tested the DTC settings.

No trace of the application was found.

The last option left was to dig into the BizTalk databases.

  1. Run SQL Profiler on the MessageBox and Management databases while executing the BTSTask commands and collect the trace log.
  2. Run a BizTalk trace using "PSSDiagForBizTalk" while executing the BTSTask commands.

Search for the application name in both trace logs and identify the stored procedure or process causing the issue.

Resolution

This issue appears in the MessageBox database due to orphans of the application. Execution of a stored procedure, bts_AdminAddModule, was interrupted during deployment, and the application was not deleted from the Modules and Services tables. The Modules and Services tables in the MessageBox are mainly responsible for subscriptions.

Note: Do not forget to stop the hosts while performing the activity below.

  1. Delete the orphan subscriptions using their subscription IDs (using a SQL query or a tool of your preference).
  2. Run the SQL query below and get the nModuleID of the application causing the issue:
     SELECT * FROM [BizTalkMsgBoxDb].[dbo].[Modules] WITH (NOLOCK)
  3. Check the Application and Instances tables for instances with this nModuleID. The query result must be empty.
  4. Run the queries below (replace 111 with your nModuleID, and change ROLLBACK to COMMIT once you have verified the affected rows):
     BEGIN TRANSACTION
     DELETE FROM [BizTalkMsgBoxDb].[dbo].[Services] WHERE nModuleID = 111
     DELETE FROM [BizTalkMsgBoxDb].[dbo].[Modules] WHERE nModuleID = 111
     ROLLBACK TRANSACTION

 

 

 

Create a SQL database on Azure using PowerShell and access it from on-premises Microsoft SQL Server Management Studio

SQL Azure is a cloud database-as-a-service provided by Microsoft. The best part is that the data is hosted and managed in Microsoft data centers. We can build our applications on premises and move our data to the cloud.

The Azure SQL model provides:

  • Elastic database pools – provision thousands of databases as needed; pools grow and shrink based on your requirements
  • Azure-hosted databases
  • Pay only for what you use (a number of plans are available)
  • Auto-scale
  • Geo-replication for disaster recovery
  • Reduced hardware cost and maintenance

In this example, we will create a SQL database in the Azure cloud and access the SQL server from on-premises SQL Server Management Studio. We will implement this using PowerShell commands and later configure firewall rules that allow access from outside Azure.

Pre-requisites

Once the Azure PowerShell modules are in place, open Windows PowerShell ISE (x86) as an administrator.

  • Log in to your Azure account using the command below.
Login-AzureRmAccount


Enter the necessary login details and click OK. After a successful login:

  • Let's create a resource group.
$location = "westeurope"   # any valid Azure region name works here
$resourceGroup = "mysqlserverRG"
New-AzureRmResourceGroup -ResourceGroupName $resourceGroup -Location $location
  • Let's create a logical SQL server using the "New-AzureRmSqlServer" command. A logical SQL server contains a group of databases managed as a group.
$servername = "myblogazuresqlserver"
$username = "mypersonallogin"
$password = "xxxxxxx"
New-AzureRmSqlServer -ResourceGroupName $resourceGroup -ServerName $servername -SqlAdministratorCredentials $(New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $(ConvertTo-SecureString -String $password -AsPlainText -Force))

Once the logical server is in place, we can move on.

  • The next step is to create a database within that server.
$databasename = "myBlogDatabase"
New-AzureRmSqlDatabase -ResourceGroupName $resourceGroup -ServerName $servername -DatabaseName $databasename

Let's navigate to the Azure portal and check the database we created, "myBlogDatabase". It uses the Standard pricing tier.

  • DTU (Database Transaction Units): 10 (S0)
  • Storage: 250 GB

Now, when you try to log in to the cloud SQL server from your on-premises SQL Server Management Studio, you will receive an error message like the one below.


Navigate to the Azure portal and configure the firewall rules to allow access from your on-premises machine.

Navigate to the database you created in the cloud -> click the firewall settings -> specify the rule name and the start and end IP addresses. Once the rules are in place, you can successfully log in to the Azure SQL server from on-premises SQL Server Management Studio.
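Once the firewall rule is in place, the same database can also be reached from application code. The sketch below is only illustrative: it reuses the placeholder server, database and credentials from the script above and assumes the System.Data.SqlClient provider.

using System;
using System.Data.SqlClient;

public static class AzureSqlConnectionDemo
{
    public static void Main()
    {
        // Placeholder values matching the PowerShell script above; the user is the SQL admin login.
        var connectionString =
            "Server=tcp:myblogazuresqlserver.database.windows.net,1433;" +
            "Database=myBlogDatabase;User ID=mypersonallogin;Password=xxxxxxx;" +
            "Encrypt=True;Connection Timeout=30;";

        using (var connection = new SqlConnection(connectionString))
        {
            // Succeeds only once the firewall rule allows your client IP address.
            connection.Open();
            Console.WriteLine("Connected to SQL Server version " + connection.ServerVersion);
        }
    }
}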


Resilient Systems with Polly.NET

General

Probably most of the web sites and applications we use or visit today are distributed in nature, running on highly complex infrastructure and using sophisticated software design patterns for the cloud, all factors that can lead to difficulties and failure. We can look at companies like Amazon, Netflix, Hulu and Facebook, to mention some, which have invested in infrastructure and software to deal with failure. We should not be strangers to the fact that systems will fail; we should embrace it and always be prepared for it.

Distributed Systems

Some systems eventually face a situation where they need to handle larger workloads with virtually no impact on performance, at least nothing "noticeable" by users; this leads engineers to re-think approaches and architect systems that can properly scale. Distributed systems are strategically designed and built with such scenarios in mind: engineers connect different components together and establish protocols, workflows and policies for efficient and reliable communication. The trade-off, however, is that this inevitably introduces more complexity as well as new risks and challenges.

Resiliency in Distributed Systems

Resiliency has to do with how a system copes with difficulties; in particular, such a system should be able to deal with failure and ideally recover from it. There are different events that introduce the risk of failure, such as:
  • General network connectivity intermittency or temporary failure, i.e. Outages.
  • Spikes in load.
  • Latency.
There are also quite a few scenarios where dependencies between systems exist, to mention some:
  • Basic network dependencies like Database Servers.
  • Systems integration and distributed processes.
  • SOA based Applications
At the end of the day, it is about embracing failure, being mindful of it when designing and developing solutions, paying close attention to requirements, workflows and key aspects like critical paths, integration points, handling cascading failures and system back pressure.

Introducing Polly.NET

This is a library that enables you to write fault-tolerant, resilient .NET-based applications by applying well-known techniques and software design patterns through an API. To mention some of the features of the library:
  • Lightweight.
  • Zero dependency, it is only Polly.NET Packages.
  • Fluent API, supports both synchronous and asynchronous operations.
  • Targets both .NET Framework and .NET Core runtimes.
The way the library works is by allowing developers to express robust and mature software design patterns through Policies. The developer will usually go through the following steps when using it:
  1. Identify what problem to handle; this will likely translate to an exception in .NET.
  2. Decide which Policy or Policies to use.
  3. Execute the Policy or Policy group.

Policies

To demonstrate some of the Policies offered by Polly.NET, we are going to assume we need to reach a Web API endpoint to retrieve a list of users. These are some of the Policies the library offers out of the box; as mentioned before, we can use them individually or combined. The first one we will look at is Retry, which allows us to specify the number of attempts to make should an action fail the first time. This can be interpreted as the system just having a short-lived issue, so we are willing to give it another chance. We can see an example below:
// Simple Retry Policy, will make three attempts before giving up.
var retryPolicy = Policy
                    .Handle<HttpRequestException>()
                    .RetryAsync(3);
HttpResponseMessage response = await retryPolicy.ExecuteAsync(() => httpClient.GetAsync(uriToCheck));
The approach presented above is pretty straightforward: a failure is considered to have happened when an exception of type HttpRequestException is raised. If this happens, the policy will retry as many times as specified in RetryAsync, in this case three, and give up if the call still does not succeed, in other words if the exception continues to be raised, in which case it will bubble up.
There are various versions of Retry; for instance, we can tell it to retry forever, although that might not be the best choice in most situations. There is also a version that allows us to do exponential back-off, which is Retry and Wait.
It behaves like the Retry policy, with the addition of being able to space out each attempt by a given duration; take the following example:
// Retry with a 'Wait' period which is given as a TimeSpan in the second argument to WaitAndRetryAsync
var retryPolicy = Policy
                   .Handle<HttpRequestException>()
                   .WaitAndRetryAsync(3, (r) => TimeSpan.FromSeconds(r * 1.5f));

HttpResponseMessage response = await retryPolicy.ExecuteAsync(() => httpClient.GetAsync(uriToCheck));

Now, in this version, we can see an extra lambda function returning a TimeSpan, which represents the time we want to wait before each subsequent attempt. If we dissect that line we find the following:

  • The argument r passed to the lambda represents the attempt number; in this case it will take three possible values: 1, 2 and 3.
  • We multiply the attempt number by a float constant greater than 1, which gives the effect of an increasing delay on each attempt.

Applying a Strategy

As mentioned before, we can apply several policies as a group, for instance Retry with Fallback; this is achieved by using Policy.Wrap. We can construct strategies by properly combining different policies that can be applied and re-used. This obviously goes on a case-by-case basis, but what is certain is that there are situations where a single policy may not suffice.
The following example demonstrates the combination of Retry and Wait with Fallback, the latter allowing us to degrade gracefully upon failure.
// Exponential back-off behavior
var retryPolicy = Policy<HttpStatusCode>
                    .Handle<HttpRequestException>()
                    .WaitAndRetryAsync(3, (r) => TimeSpan.FromSeconds(r * 1.5f));

// Degrade gracefully by returning bad gateway
var fallbackPolicy = Policy<HttpStatusCode>
                       .Handle<HttpRequestException>()
                       .FallbackAsync((t) => Task.FromResult(HttpStatusCode.BadGateway));

/**
* Combine them, first argument is outer-most, last is inner-most.
* Policies are evaluated from inner-most to outer-most, in this case:
* retry then fallback
*/
var policyStrategy = Policy.WrapAsync(fallbackPolicy, retryPolicy);
HttpStatusCode resultCode = await policyStrategy.ExecuteAsync(async () => (await httpClient.GetAsync(uriToCheck)).StatusCode);

We are dealing with status codes here for the simplicity of the example, but a better approach for the fallback action would be to follow the Null Object design pattern and produce an empty list of users; this really depends on the requirements and structure of our application. The patterns introduced in this article are only a few I have picked to show you how the library works and how such patterns can be applied. Polly exposes even more Policies to enforce a broad set of well-known software design patterns for resiliency (a brief Circuit Breaker sketch follows the list below), such as:

  • Circuit Breaker.
  • Advanced Circuit Breaker.
  • Bulkhead Isolation.
  • Fallback.
  • Cache (which gives us the cache-aside / read-through pattern)
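As a flavour of one of these, the sketch below declares a Circuit Breaker in the same style as the earlier examples; the thresholds are arbitrary values chosen for illustration.

// Break the circuit after 2 consecutive HttpRequestExceptions and keep it open
// for 30 seconds; calls made while it is open fail fast with a
// BrokenCircuitException instead of hammering the struggling endpoint.
var circuitBreakerPolicy = Policy
    .Handle<HttpRequestException>()
    .CircuitBreakerAsync(2, TimeSpan.FromSeconds(30));

HttpResponseMessage response = await circuitBreakerPolicy.ExecuteAsync(() => httpClient.GetAsync(uriToCheck));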
It is up to us to identify and understand the interactions between the different components of an application, find the fragile points where we could potentially face failure, and define a solid strategy for dealing with such situations.


Using the right tool for the right job

For the most part, our lives aren't that particularly complicated. Our decisions don't always have the impact of a meteorite on our lives or the lives of others. But some decisions do tend to follow us, nay, haunt us for longer than we originally anticipated – namely the choices we make when settling on a specific tool.

Some years back my wife asked me to build her a veggie garden out back, with a picket fence, gate, paved path, a simple shed and sandstone borders. It was a simple setup until I got to the stage where I needed to build the gate.

I should mention in passing that my handyman skills are pretty much trial and error, or whatever tips and tricks I can locate on the web (read: YouTube, Vimeo et al. – I need pictures for this sort of work).

Rather than simply buying a finished gate, I had to go all manly and build a custom one. After all, my skills had recently grown to the level of a master carpenter, no? The picket fence, veggie garden et al. were nearly complete.

After struggling for 8 hours more than was necessary, I finally had the gate finished and needed to mount it. Of course, it didn't fit, so I had to take the gate apart again. This went on for a little while until I finally got it right.

The lesson I took away from this, at the time (of course), was that carpentry wasn't my forte, so I could be excused for taking a little longer and producing a gate that wasn't fitted professionally, with a small gap between the fence and the sandstone border bricks.

Looking back, the first mistake was obviously the choice of tool to use – namely myself.

Facing choices in business

Moving along to an industry where such choices pop up on a near-daily basis, the ramifications of making the incorrect choice obviously have a steep impact on profitability and business continuity.

The IT&T industry is rife with these pitfalls, from the choice of service provider and the hiring process down to which framework to adopt for a given project. If you ask a practitioner for advice on what product, client tools or enterprise framework to use, self-interest will always kick in, and the answer will be whatever preserves or enhances the practitioner's current status.

The same could be said for having so-called independent consultants provide you with a recommendation. Said consultancy would naturally focus on an answer which gives them the highest chance of continuing the engagement – would that make the answers and recommendations wrong or inaccurate? Of course not, not by those virtues alone.

This is of course what makes it even harder. Why can’t you trust an independent recommendation?

Simply because there are a thousand ways to skin a cat. What is the correct choice today will likely change in the near future as the business changes.

Buying off the rack

Information technology adoption decisions tend to be made, first and foremost, on the financial impact of adoption. For a business which doesn't deal in IT but is largely dependent on it, the bottom line will generally be cost, and the equation simple:

x ≥ y

where x = budget and y = cost.

Of course, the relevant departments will have done their due diligence with regards to product choices or service provider. They’d have had a string of sales and technical presales consultants strut their stuff, showing off their product or services in the best light possible.

People far smarter than I have been telling us about this for years. So why is it that IT projects have one of the largest failure rates in the world? Why is it that budgets are only ever guides, not true costs of adoption? Surely we should have learned enough by now to know how to do this right.

It's not always simple to make the right choice, even for subject matter experts who make a living working in a space littered with off-the-shelf solutions, talented developers, and magicians.

When working through the requirements, we all know that it’s important to engage in depth with stakeholders and business users. Buy-in from the business is mandatory for adoption or project success. Without it, you simply won’t ever be able to complete the project with a positive outcome.

Words of wisdom that have been known for decades to be immutable facts.

Taking on a custom solution

However, project success is not just about delivering on time and under budget; there are many more factors that have to be weighed before the stamp of approval is given.

This is especially true when the choice is made to create a custom solution – a solution which fits your business like a deerskin glove… smooth and comfortable.

One of the aspects, which helps define the cost of adoption for software projects, is calculating technical debt.

Ted Theodoropoulos wrote an excellent article, back in 2010, where he goes into the primary points needed in order to identify and calculate technical debt.

I can highly recommend reading his 4 part series on Technical Debt (http://blog.acrowire.com/technical-debt/technical-debt-part-1-definition).

Accurately predicting the future

Obviously, it’s not feasible to accurately predict the future. I’m sure the world would be a very different place right now if that was not the case.

We spend a lot of effort and money on analyzing the operational paths which could potentially, with some degree of probability, come true… maybe… Business intelligence can show you predictive trend analysis outcomes, technical subject matter experts can tell you what direction the market is moving in, and your staff can give you their opinion based on past experience.

When the chips are down, the fact is…we just don’t know.

So since we can’t predict the future and haven’t quite mastered time travel yet, what can be done about it?

The most important aspect of any decision is that it cannot be definite. We know the future can change week by week, so why are we so set on ensuring that our decisions are definite?

How often have we seen the following play out after 2 years of product/project implementation?

Director: “Why did you choose x Product/Platform as the solution for our ERP. It has now cost us $X million so far, with no end in sight?”

IT Manager: "It was within our originally estimated budget and covered about 80% of our immediate business requirements 2 years ago, but that was before we expanded our service offering and went international. We didn't know these changes would happen when the purchasing decision was made."

The predictive nature of business risk calculation always looks to past trends, and when those prove incorrect, the crystal ball is hidden away and we're stuck with the same moniker that plagues our industry – 20-20 hindsight.

One size fits all

Off-the-shelf products (or shrink-wrapped products) rarely fit all industries, let alone every business within them, and we generally pride ourselves on "doing it differently from all the rest". So why should a product ever be expected to fit your company's processes perfectly? Just because we told the sales rep what we wanted and were told that it could cover all of it?

No, that's just not a realistic answer or solution. When implementing an existing product, the business has to expect that some level of internal change will be needed. How much varies with the size of the implementation – the more areas it impacts, the higher the likelihood that something needs to change.

Choosing a solution that ends up requiring more external work than what it covers is obviously not a good investment.

An agile business needs agile decisions

The most important aspect of business today is that it needs to move with the needs of its customers and the market it operates in.

This expectation of agility needs to be applied to all facets of the business, especially when it comes to IT investments and adoption. Being agile in the way information is consumed, orders are processed and tasks are completed comes down to how the systems are designed, from the bottom up.

Using cloud services for key integration points could be vital for your IT investment's ability to support an expanding business.

Being able to facilitate information sharing across international locations could be essential for business growth, IP discovery, and collaboration. The architecture around systems needs to be built with flexibility in mind – the same goes for the platforms and tools used to build individual components.

That’s why it’s important to use the right tool for the right job…even down to the level where lines of code are written and what framework is being used.


At byBrick Development we are skilled in ascertaining the best fit by applying a methodology to the decision which adheres to the architectural principles you have established.

If your company doesn’t have principles in place then we can certainly also assist in developing those.

byBrick Development – the BLOG

Why publish a blog?

This is probably a question a lot of non-techie companies would ask and this post is here to explain why we, at byBrick Development, decided to publish a blog.

First of all, we have a group of extremely creative and highly competent technical practitioners, and what better way for us to share some of this knowledge than to write a blog about it?

Our core competencies are focused on Office 365, integration and bespoke .NET solutions, and many of our consultants have in excess of 10, 15 or even 20 years' experience in the industry.

We are a highly diverse group of consultants, with people from all over the world, which is yet another particularity about byBrick Development that sets us apart from many other consultancies.

Another particularity is that many of our consultants are actively engaged in knowledge sharing and believe strongly in it. So much so that it has been on their wishlist for quite some time that we set up a blog.

So what can be expected from this blog?

Seeing as there is a huge amount of experience amongst our consultants, it's really very hard to specify just a single topic (or even a handful) that will appear here. We promise to ensure a great deal of spread across all of our technical and business knowledge.

But who are we?


The industry’s best IT Consultants…

We have chosen to focus on a small number of technologies and approaches in order to offer the best IT consultants in the areas where we operate. Our consultants are characterised by a unique ability to combine deep technical expertise with an understanding of various business needs and challenges.

IT should be as simple and cost-effective as possible. We work with proven platforms and standards to avoid locking our customers into systems and solutions that only we, or a few other providers, can maintain and develop further. Our development methods provide full transparency for our customers, from costing to project management and status.

Our customers choose us because we are the best at what we do!