Lies, damn lies and statistics


Systems of Record. Bollocks.

Systems of record are a widely accepted architectural pattern, used to delineate responsibilities clearly and to guide end-to-end solution architectures.

It is an architectural pattern of its time, suited to an IT landscape of large monolithic applications, and it has been the dominant strategy across almost all corporate IT since the 90s. If this is your IT strategy, and you have good governance, patterns for change, and an integration architecture that keeps these monoliths in sync, then go forth and prosper - you will have mature IT and a reasonable cadence of change.

More likely, however, significant data is now created outside what are considered the systems of record: in half-migrated replacement systems, micro-services, and analytics / machine learning flows. The misalignment between systems of record and the business view has become more obvious and more material.

There is not a good semantic fit between an application's view of data and the business view of data, so let's stop pretending.

Stop Versioning!

With apologies to Jamie Zawinski's famous dig:

Some people, when confronted with a problem, think "I know, I'll use versioning." Now they have 2.1.0 problems.

My assertion is that integration versioning should be a last resort, used in specific situations. If you own both sides of the integration contract then do not version.

Versioning is in effect kicking the can down the road for your future self, or worse - leaving it for someone with no knowledge of the interface.

By following Postel's Law, using abstraction layers and micro-service boilerplate, and not being afraid of regression testing (i.e. modern IT practices), we get simple-to-change components which we can track and upgrade.
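The tolerant-reader side of Postel's Law is the key to living without interface versions. A minimal sketch in Python, with the event shape and field names purely illustrative:

```python
import json

def read_order_event(raw: str) -> dict:
    """Tolerant reader: take what we need, ignore what we don't.

    Unknown fields are silently ignored and optional fields get
    defaults, so the producer can evolve the payload without a
    version bump breaking this consumer.
    """
    event = json.loads(raw)
    return {
        "order_id": event["order_id"],            # required: fail loudly if absent
        "total": float(event.get("total", 0.0)),  # optional: default rather than break
        "currency": event.get("currency", "AUD"), # newly added field? old consumers never notice
    }

# The producer has added a "channel" field; this consumer is unaffected.
payload = '{"order_id": "A42", "total": "19.95", "channel": "web"}'
order = read_order_event(payload)
```

The design choice is that the contract lives in what the consumer actually reads, not in a version number: additive change is free, and only a genuinely breaking change forces coordinated work on both sides.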

Data Autonomy - Case Study

A couple of contracts back, while consulting as a solution architect at a national retail organisation, I ran an experiment as a proof of the cadence which is possible using Data Autonomy.

Shortly after the project went live, I decided to build it again myself using Data Autonomy as realistically as possible. The result was a far better solution in a third of the time & cost.

Data Autonomy - BI & Analytics

BI & analytics love Data Autonomy and event-driven architecture. On the operational side of a Semantic Hub / Data Mesh, data is already clean and in business form. Data engineering can subscribe to all significant business object changes, and metrics can be calculated automatically, live. Dimensional modelling also becomes a much simpler process, as canonical data is available in near real time.
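A sketch of what subscribing to business object changes and calculating a metric live might look like, using a toy in-process pub/sub to stand in for whatever broker you actually run; all names here are illustrative:

```python
from collections import defaultdict
from typing import Callable

# Toy in-process event bus standing in for a real broker.
_subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

def subscribe(topic: str, handler: Callable[[dict], None]) -> None:
    _subscribers[topic].append(handler)

def publish(topic: str, event: dict) -> None:
    for handler in _subscribers[topic]:
        handler(event)

# Data engineering subscribes to canonical business object changes
# and maintains the metric incrementally - no overnight batch needed.
revenue_by_store: dict[str, float] = defaultdict(float)

def on_sale(event: dict) -> None:
    revenue_by_store[event["store"]] += event["total"]

subscribe("sale.completed", on_sale)

publish("sale.completed", {"store": "S01", "total": 19.95})
publish("sale.completed", {"store": "S01", "total": 5.00})
publish("sale.completed", {"store": "S02", "total": 42.00})
```

Because the events already carry clean, canonical business objects, the metric logic is a one-liner per subscription rather than a transformation pipeline.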

The data engineering and integration teams can become an active part of data governance and data stewardship. Working closely with the business domain SMEs, everyone is on the same page and reporting is simplified.