
Macro trends in the tech industry | Nov 2018
By
Published: November 14, 2018
Twice a year we create the ÷ČÓ°Ö±²„ Technology Radar, an opinionated look at what's happening in the enterprise tech world. We cover tools, techniques, languages, and platforms, and we generally call out over one hundred individual "blips". Along with this detail we write about a handful of overarching "themes" that can help a reader see the forest for the trees, and in this piece I try to capture not just Radar themes but wider trends across the tech industry today. These "macro trends" articles are only possible with help from the large technology community at ÷ČÓ°Ö±²„, so I'd like to thank everyone who has contributed ideas and commented on drafts.
Quantum Computing is both here and not here
We're continuing to see traction in the quantum computing field. Academic institutions are partnering with commercial organizations, large investments are being made, and a community of startups and university spinouts is springing up. Microsoft's Q# language allows developers to get started with quantum computing and run algorithms against simulated machines, as well as tap into real cloud-based quantum computers. IBM Q is its competing offering, again partnering with large commercial organizations, academia, and startups. At a local level, we've hosted quantum computing hack nights with extremely good community turnout.
But quantum still isn't ready for prime time.
The largest (non-classified) quantum computer available as of this writing is small: . There are a lot of headlines predicting the forthcoming demise of conventional cryptography, but breaking 2048-bit RSA keys likely requires a quantum computer of at least 6,000 qubits, and more modern algorithms such as AES probably have better security against quantum attacks. A commercially useful quantum computer is expected to need at least 100 qubits, as well as better stability and error correction than is available today. Practical uses for quantum computing are still in the realm of research exercises, for example, modeling the properties of complex molecules in chemistry. For now, at least, mainstream enterprise use of quantum computing seems a long way off.
Hyperkinetic pace of change
We've frequently observed that the pace of change in technology is not just fast: it's accelerating. When we started the Radar a decade ago, the default was for entries to remain for two Radar editions (approximately one year) with no movement before fading away automatically. However, as indicated by the formula in one of our Radar themes (pace = distance over time), change in the software development ecosystem continues to accelerate. Time has remained constant (we still create the Radar twice a year), but the distance traveled in terms of technology innovation has noticeably increased, providing yet more evidence of what's obvious to any astute observer: the pace of technology change continues to increase. We see an increased pace in all our Radar quadrants and also in our clients' appetite for adopting new and diverse technology choices. Given that almost everything in the world today across business, politics, and society is driven by technology, the pace of change in all these other areas increases as well. An important corollary for businesses is that there will be much less time available to adopt new technologies and business models: it's still "adapt or die," but the pressure is higher now than ever before.
For companies to compete, continuous modernization is required
The need to upgrade and replace older technology isn't new (for as long as computers have been around, a new model has been in planning or just around the corner), but it does feel like the "volume level" on the need to modernize has increased. Businesses need to move fast, and they can't do so encumbered by their legacy tech estate. Modern businesses compete to offer the best customer experience, brand loyalty is largely dead, and the fastest movers are often the winners. This issue hits all companies, even the darlings of Silicon Valley and the startup unicorns of the world, because almost as soon as something is in production it can be considered legacy technology: an anchor rather than an asset. The success of these companies lies in constantly upgrading and refining their technology and platforms.
My colleague George Earle and I have recently written a detailing the imperative to modernize as well as a plan for doing it.
Industry catches up to previous big shifts
It was obvious to us that containers (especially Docker) and container platforms (especially Kubernetes) were important from the get-go. A couple of Radars ago, we declared that Kubernetes had won the battle and was the modern platform of choice; industry now seems to agree with us. There are a phenomenal number of Kubernetes-related blips on this edition of the Radar: Knative, gVisor, Rook, SPIFFE, kube-bench, Jaeger, Pulumi, Heptio Ark and acs-engine, to name but a few. These cover the wider Kubernetes ecosystem, configuration scanning, security auditing, disaster recovery and so on, and all of them help us build clusters more easily and reliably.
Lingering Enterprise Antipatterns
In this edition of the Radar, many of our "Hold" entries are simply new ways to be misguided in putting together enterprise systems. We have new tools and platforms, but we tend to keep making the same mistakes. Here are a few examples:
- Recreating ESB antipatterns with Kafka: this is the "egregious spaghetti box" all over again, where a perfectly good technology (Kafka) is abused for the sake of centralization or efficiency.
- Overambitious API gateways: a perfectly good technology for access management and rate limiting of APIs has transformation and business logic added to it.
- Data-hungry packages: we buy a software package to do one thing, but it ends up taking over the organization, feeding on more and more data and accidentally becoming the "master" for all of it, while requiring a lot of integration work too.
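To make the API gateway point concrete, here's a minimal Python sketch. The request shape, handler names, and business rules are all invented for the example; the contrast is between a gateway that sticks to cross-cutting concerns and one that smuggles domain logic into the routing layer, coupling it to every downstream service.

```python
def thin_gateway(request, backend):
    """A gateway limited to cross-cutting concerns: authentication and
    rate limiting belong here; everything else passes through untouched."""
    if not request.get("api_key"):
        return {"status": 401}
    return backend(request)

def overambitious_gateway(request, backend):
    """Antipattern: payload transformation and business rules creep into
    the gateway, so every domain change also means a gateway change."""
    if not request.get("api_key"):
        return {"status": 401}
    # Domain logic that belongs in a service, not the gateway:
    request["amount_cents"] = int(float(request.get("amount", "0")) * 100)
    response = backend(request)
    response["discount_applied"] = request["amount_cents"] > 10_000
    return response

# Dummy backend standing in for a downstream service.
order_service = lambda req: {"status": 200, "order": req.get("amount_cents", 0)}
print(thin_gateway({"api_key": "k", "amount": "150.00"}, order_service))
```

The thin version can front any service without knowing its domain; the overambitious version has to be redeployed whenever the pricing rules change.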

JavaScript community goes quiet
We've previously written about the churn in the JavaScript ecosystem, but the community appears to be emerging from a period of rapid growth into one with less excitement. Our contacts in the publishing industry tell us that searches for JavaScript-related content have been replaced by interest in a group of languages led by Go, Rust, Kotlin, and Python. Could it be that has come to pass (everything that can be written in JavaScript has been written in JavaScript) and developers have moved on to new languages? This could also be an effect of the rise of , where a polyglot approach is much more feasible, allowing developers to experiment with the best language for each component. Either way, there's a lot less JavaScript on our Radar in this edition.
Cloud happened, and it's still happening
One of our themes on this Radar is the surprising "stickiness" of cloud providers, who are in a tight race to win hosting business and often add features and services to improve the attractiveness of their product. Using these vendor-specific features can lead to accidental lock-in but will, of course, accelerate delivery, so they are a bit of a double-edged sword.
As always, we recommend going in with your eyes open and evaluating use cases, lock-in potential, and the cost and impact of needing to switch providers.
In the cloud space right now, we see more and more organizations successfully move to the public cloud, with more mature conversations and understanding around what this means. Bigger companies, even banks and others in regulated industries, are moving larger and more sensitive workloads to the cloud, and bringing their regulators along on the journey. In some cases, this means they're mandated to pursue a multi-cloud strategy for those material workloads. Many of the blips on today's Radar (multi-cloud sensibly, financial sympathy, and so on) are indicators that cloud is finally mainstream for all organizations and that the "long tail" of migration is here.
Serverless gains traction, but itās not a slam dunk (yet)
Serverless architectures are one of the biggest trends in today's IT landscape, but possibly also the most misunderstood. In this edition of the Radar, we don't actually highlight any blips for serverless tech; we've done so in the past, but this time around we felt nothing quite made the cut. That's not to say things are quiet in the serverless space, however. Amazon recently released , something that is relatively rare for AWS services, and almost everything on the AWS platform has some sort of Lambda tie-in. The other major cloud vendors offer competing (but similar) services and tend to respond whenever Amazon makes a move in this space.
Where things get tricky is when an organization simply assumes its workload is appropriate for serverless techniques and carries on regardless, or doesn't really do the math on whether it's better to pay for on-use functions than to set up and maintain a dedicated server instance. We'd highlight two key areas where serverless needs to mature:
- Patterns for use: architectural and workload models where the approach is or isn't the right one. A better understanding is needed of how to compose an application from serverless components as well as containers and virtual machines.
- Pricing model: not well understood or easy to tune, leading to large bills and limited applicability. Ideally, we should compare total cost of ownership, including things like DevOps engineering time, server maintenance and so on.
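"Doing the math" here can be a simple back-of-the-envelope comparison. The sketch below is illustrative only: the per-GB-second and per-request rates are assumptions loosely modeled on published pay-per-use function pricing, the instance rate and ops-overhead figure are placeholders, and you should substitute your own numbers.

```python
# Back-of-the-envelope comparison: pay-per-use functions vs. an
# always-on server. All prices are illustrative assumptions.

GB_SECOND_PRICE = 0.0000166667   # $ per GB-second of function execution
PER_MILLION_REQUESTS = 0.20      # $ per million invocations
INSTANCE_HOURLY = 0.05           # $ per hour for a small dedicated instance
HOURS_PER_MONTH = 730

def functions_monthly_cost(requests, avg_duration_s, memory_gb):
    """Monthly cost of running a workload on pay-per-use functions."""
    compute = requests * avg_duration_s * memory_gb * GB_SECOND_PRICE
    invocations = requests / 1_000_000 * PER_MILLION_REQUESTS
    return compute + invocations

def server_monthly_cost(ops_overhead=100.0):
    """Monthly cost of a dedicated instance, plus a rough dollar figure
    for the maintenance time that serverless would have saved."""
    return INSTANCE_HOURLY * HOURS_PER_MONTH + ops_overhead

# A spiky, low-volume workload strongly favors functions...
low = functions_monthly_cost(requests=1_000_000, avg_duration_s=0.2, memory_gb=0.5)
# ...while a busy, steady workload may not.
high = functions_monthly_cost(requests=300_000_000, avg_duration_s=0.2, memory_gb=0.5)

print(f"low-volume functions:  ${low:.2f}/month")
print(f"high-volume functions: ${high:.2f}/month")
print(f"dedicated server:      ${server_monthly_cost():.2f}/month")
```

The crossover point moves dramatically with request volume and duration, which is exactly why skipping this calculation leads to the large bills mentioned above.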
Engineering for failure
In the past we've highlighted Netflix's testing tools that deliberately cause failures in a production system, so you can be sure that your architecture can tolerate failure. This practice, known as Chaos Engineering, has become more widespread and expanded into related areas. In this Radar, we highlight the 1% Canary and Security Chaos Engineering as specific instances of engineering for failure.
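As a toy illustration of the underlying idea, the sketch below injects failures into a small fraction of calls and checks that a retry-with-fallback wrapper still produces an answer. The function names are invented for this example; real chaos engineering tooling injects faults at the infrastructure level rather than in application code.

```python
import random

def with_failure_injection(fn, failure_rate=0.01, rng=None):
    """Wrap fn so that roughly failure_rate of calls raise an error,
    mimicking the kind of fault a chaos experiment introduces."""
    rng = rng or random.Random()
    def wrapped(*args, **kwargs):
        if rng.random() < failure_rate:
            raise RuntimeError("injected failure")
        return fn(*args, **kwargs)
    return wrapped

def resilient_call(fn, retries=3, fallback=None):
    """Retry a flaky call a few times, then fall back to a default;
    this tolerance is what the chaos experiment is meant to verify."""
    for _ in range(retries):
        try:
            return fn()
        except RuntimeError:
            continue
    return fallback

# Verify the caller survives a 1%-style failure injection.
flaky = with_failure_injection(lambda: "ok", failure_rate=0.01,
                               rng=random.Random(42))
results = [resilient_call(flaky, fallback="cached") for _ in range(1_000)]
print(results.count("cached"), "of 1000 calls fell back")
```

The experiment fails loudly if the wrapper ever returns nothing at all, which is the whole point: you learn about missing retries and fallbacks before your users do.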

Enduring good practices
As a happy counterpoint to the problems with lingering antipatterns, in this Radar we highlight that good practices endure in the industry. Whenever a new technology comes along, we all experiment with it, trying to figure out which use cases are the best fit and what the limits are of what it can and can't do. A good example of this is the recent emphasis on data and machine learning. Once that new thing has been experimented with, and we've learnt what it's good for and what it can do, we need to apply good engineering practices to it. In the machine learning case, we'd recommend applying automated testing and continuous delivery practices; combined, we call this Continuous Intelligence. The point here is that all the practices we've developed over the years to build software well continue to be applicable to all the new things. Doing a good job with the "craft" of software creation continues to be important, no matter how the underlying tech changes.
That's it for this edition of Macro Trends. If you've enjoyed this commentary, you might also like our recently re-launched podcast series, where I am a host along with several of my ÷ČÓ°Ö±²„ colleagues. We release a podcast every two weeks covering topics such as agile data science, distributed systems, continuous intelligence and IoT. Check it out!
Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of ÷ČÓ°Ö±²„.