
Macro trends in the tech industry | Nov 2019

The Technology Radar is a snapshot of things that we've recently encountered, the stuff that's piqued our interest. But the act of creating the Radar also means we have a bunch of fascinating discussions that can't always be captured as blips or themes. Here's our latest look into what's happening in the world of software.

Race for cloud supremacy resulting in too many feature false starts

As I've written about previously, cloud is the dominant infrastructure and architectural style in the industry today, and the major cloud vendors are in a constant fight to build market share and gain a leg up over their competitors. This has led them to push features to the market before, in our opinion, those features and services were really ready for prime time. This is a pattern we've seen many times over in the past, where enterprise software vendors would market their product as having more features than a competitor's, whether or not those features were actually complete and available in the product. This isn't a new problem, per se, but it is a fundamental challenge with today's cloud landscape. It's also not an accident: this is a deliberate strategy and a consequence of how the cloud companies have structured themselves to get software out the door really fast.

The race by each cloud platform to deliver new products and services isn't necessarily going to create good outcomes for the teams using them. The vendors over-promise, so it's "buyer beware" for our teams. When there's a new cloud database or other service, it's critical that teams evaluate whether something is actually ready for their use. Can the team live with the inevitable rough edges and limitations?

Hybrid cloud tooling starts to take shape

Many large organizations are in a "hybrid cloud" situation, with more than one cloud provider in use. The choice to use a single provider or multiple providers is complex and involves not just technology but also commercial, political and even regulatory considerations. For example, organizations in highly regulated industries may need to prove to a regulator that they could easily move to a new cloud provider should their current provider suffer some kind of catastrophic technical or commercial problem that rendered it no longer a going concern. Some of our clients are undertaking significant consolidation work to transition to a single cloud platform, because being on multiple clouds is problematic: latency, the complexity of VPN setup, a desire to consolidate in order to get better pricing from the vendor, or the pull of cloud-specific features such as Kubernetes support or access to particular machine learning algorithms.

Such transitions or consolidations can take years, especially when you consider how legacy on-premise assets may factor into the plan, so organizations need a better way to deal with multiple clouds. A number of "hybrid cloud control planes" are springing up that may help ease the pain, and several of them are worth looking at if you're struggling with multiple clouds.

"Quantum-ready" could be next year's strategic play

Google recently trumpeted its achievement in so-called "quantum supremacy": it has built a quantum computer that can run an algorithm that would be essentially intractable on a classical computer. In this particular case, Google used a 53-qubit quantum computer to solve in 200 seconds a problem that would take a classical supercomputer 10,000 years (IBM disputes the claim, and says its supercomputer could achieve the result in 2.5 days). The key point is to show that quantum computers are more than just an expensive toy in a lab, and that there are no hidden barriers to quantum computing solving important, larger-sized problems.

For now, the problems solvable with a small number of qubits are limited in number and usefulness, but quantum is clearly on the horizon. Canadian startup Xanadu is developing not just quantum chips, using a "photonic" approach to capture quantum effects as opposed to Google's use of superconductors, but also quantum simulation and training tools. They point out that even though most quantum algorithms today seem a bit theoretical, you can use quantum techniques to speed up problems such as Monte Carlo simulation, something that's very useful today in fields such as FinTech.

As with many technology shifts (big data, blockchain, machine learning), it's important to at least have a passing familiarity with the technology and what it might do for your business. Several major vendors provide tools to simulate quantum computers, as well as, in some cases, access to real quantum computing hardware. While your organization may not (yet) be able to take advantage of highly specific algorithmic speedups, "quantum-ready developer" could soon become as popular a job title as "data scientist" has been in the past.
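To make the Monte Carlo claim concrete, here is a minimal classical sketch (estimating pi by random sampling, purely as a stand-in for a pricing or risk simulation). The relevance to quantum: amplitude estimation can, in principle, reach the same accuracy with quadratically fewer oracle calls than the classical sampling shown here.

```python
import random

def monte_carlo_pi(samples: int, seed: int = 42) -> float:
    """Classical Monte Carlo estimate of pi: error shrinks as O(1/sqrt(n))."""
    rng = random.Random(seed)
    hits = sum(
        1
        for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * hits / samples

# Quantum amplitude estimation promises comparable accuracy from
# O(1/epsilon) queries instead of the O(1/epsilon^2) samples needed
# classically -- the kind of speedup that matters for FinTech workloads.
estimate = monte_carlo_pi(100_000)
```

Halving the error classically costs four times the samples; that quadratic gap is exactly what the quantum speedup targets.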

90% decommissioned is 0% saved

As an industry, IT constantly faces the pressure of legacy systems. If something is old, it might not be adaptable enough for today's fast pace of change, too expensive to maintain, or just plain risky: creaky systems running on eBay'd hardware can be a big liability. As IT professionals we constantly need to deal with, and eventually retire, legacy systems. One cool-sounding approach to legacy replacement is the strangler pattern, where we build around and augment a legacy system, intending to eventually retire it completely. This pattern gets a lot of attention, not least due to the violent-sounding name. Many people would like to do violence to some of these frustrating older systems, so you tend to get a lot of support for a strategy that involves "strangling" one of them.



The problem comes when we claim to be strangling the legacy system but end up just building extra systems and APIs on top. We never actually retire the legacy. Our colleague Jonny LeRoy (famed for his ability to name things) suggested that we put "neck massage for legacy systems" on Hold. We felt the blip was too complex for the Radar, but people liked the message: if we plan to retire a legacy system using the strangler pattern, we'd better actually get around to that retirement, or the whole justification for our efforts falls apart.
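The mechanics of the pattern can be sketched in a few lines: a facade routes traffic feature-by-feature from the legacy system to its replacement, and tracks whether the retirement that justifies the whole effort has actually been earned. All names here (features, service labels) are illustrative, not from any real system.

```python
# Minimal sketch of a strangler facade. The legacy system stays behind
# the facade until every feature has been migrated off it.

LEGACY_FEATURES = {"billing", "reporting", "accounts"}

class StranglerFacade:
    def __init__(self) -> None:
        self.migrated: set[str] = set()

    def migrate(self, feature: str) -> None:
        """Cut a single feature over to the replacement service."""
        self.migrated.add(feature)

    def route(self, feature: str) -> str:
        # Migrated traffic goes to the replacement; everything else
        # still falls through to the legacy system.
        return "new-service" if feature in self.migrated else "legacy"

    def legacy_retirable(self) -> bool:
        # 90% decommissioned is 0% saved: the legacy system keeps
        # running (and keeps being a liability) until every feature
        # has moved off it.
        return LEGACY_FEATURES <= self.migrated

facade = StranglerFacade()
facade.migrate("billing")
facade.route("billing")     # -> "new-service"
facade.route("reporting")   # -> "legacy"
facade.legacy_retirable()   # -> False until all three features migrate
```

The `legacy_retirable` check is the point of the section: partial migration still leaves you paying the full cost of running the old system.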

Trunk-based development seems to be losing the fight

We've campaigned for years that trunk-based development, where every developer commits their code directly to a "main line" of source control (and does so daily or better), is the best way to create software. As someone who's seen a lot of source code messes, I can tell you that branching is not free (or even cheap) and that even fancy code merging with tools such as Git doesn't save a team from the problems caused by a heavily branched style of development. The usual reasons given for wanting code branches are actually signs of deeper problems with a team or a system architecture, and should be solved directly instead of papered over with branches. For example, if you don't trust certain developers to commit code to your project and you use branches or pull requests as a code review mechanism, maybe you should fix the core trust issue instead. If you're not sure you're going to hit a project deadline and want to use branches to "cherry pick" changes for a release candidate, you're in a world of hurt and should fix your estimation, prioritization and project management problems rather than using branches as a band-aid.

Unfortunately, we seem to be losing the fight on this one. Branching techniques such as GitFlow continue to gain traction, as does the use of pull requests for governance activities such as code review. Our erstwhile colleague Paul Hammant, who created and maintains trunkbaseddevelopment.com, has (grudgingly, I hope!) included short-lived feature branches as a recommendation for how to do trunk-based development at scale. We're a little glum that our favored technique seems to be losing the fight, but we hope like-minded teams will continue to push for sane, trunk-based development where possible.

XR is waiting for Apple

At the recent Facebook Connect conference, Facebook reaffirmed its commitment to AR and VR but didn't have anything specific to announce. The most recent leaks and rumors suggest that Apple is working on an XR headset, with AR glasses planned for 2022. As with many other advances such as the smartphone and smartwatch, Apple will probably lead the way when it comes to creating really compelling experience design. Apple's magic has always been to combine engineering advancements with a great consumer experience, and it doesn't enter a market until it can truly do that. For a long time (and maybe still today) Apple's Human Interface Guidelines have been required reading for anyone building an app. I expect a similar leap forward when Apple (eventually) gets into the AR space. Until then, while we have some nifty demos and some limited training experiences, XR is going to remain a bit of a niche technology.

Machine learning continues to amaze and astonish, but do we understand it?

One of my favourite YouTube channels is Two Minute Papers, in which researcher Károly Zsolnai-Fehér provides mind-blowing reporting on advances in AI systems. Recently the channel has featured, among other things, an AI that can run physical simulations 30,000 times faster than a traditional solver, and an AI that literally breaks the rules of the game world within which it's playing. The channel does a great job of showing the amazing (and slightly scary) advancements in narrow-AI capability, usually for problems that can be visualized and make for good videos. But machine learning is also being applied to many other fields such as business decision making, medicine, and even advising judges on sentencing criminals, so it's important that we understand how an AI or machine learning system works.

One big problem is that although we can describe what an underlying algorithm is doing (for example, how back propagation in a neural network works), we can't explain what the network actually does once trained. This Radar features tools such as the What-If Tool and techniques such as ethical bias testing. We think that explainability should be a first-class concern when choosing a machine learning model.
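One of the simplest explainability probes is the counterfactual "what if" question: re-score the model with one input nudged and see how much the output moves. The sketch below uses a hand-rolled logistic scorer with made-up weights as a stand-in for a trained model; it is not the actual What-If Tool, just an illustration of the idea behind that class of tooling.

```python
import math

# A stand-in for a trained model: a logistic scorer with fixed,
# illustrative weights. The point here is the probe, not the model.
WEIGHTS = {"income": 0.8, "age": -0.2, "tenure": 0.5}

def model(features: dict[str, float]) -> float:
    """Score in (0, 1), e.g. probability of approving an application."""
    z = sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def what_if(features: dict[str, float], name: str, delta: float) -> float:
    """Re-score with one input nudged: a crude counterfactual probe."""
    tweaked = dict(features)
    tweaked[name] += delta
    return model(tweaked) - model(features)

applicant = {"income": 1.0, "age": 0.5, "tenure": 0.0}
# Which input moves the decision most? Perturb each one and compare.
sensitivities = {name: what_if(applicant, name, 0.1) for name in applicant}
```

Even this crude probe surfaces useful questions (why does this input dominate? should it?), which is the conversation the section argues teams need to have before trusting a model.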

Mechanical sympathy comes around again

Back in 2012, the Radar featured a concept called "mechanical sympathy," based on the work of the LMAX Disruptor team. At a time when many software applications were being written at an increasing level of abstraction, Disruptor got closer to the metal, being tuned for extremely high performance on specific Intel CPUs. The LMAX problem was inherently single threaded, and the team needed high performance from single-CPU machines. It seems like mechanical sympathy is having something of a resurgence.

Last Radar we featured Humio, a log aggregation tool built to be super fast at both log ingestion and querying. This Radar, we're featuring GraalVM, a high-performance virtual machine. We think it's ironic that much of the progress in the software industry comes from getting things away from the hardware (containers, Kubernetes, Functions-as-a-Service, databases-in-the-cloud, and so on) and yet others are hyper-focused on the hardware on which we're running. I guess it depends on the use case. Do you need scale and elasticity? Then get away from the hardware and get to the cloud. Do you have a very specific use case like high-frequency trading? Then get closer to the hardware with some of these techniques.
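Mechanical sympathy in miniature: traversing the same matrix in row-major order (matching how the data is laid out) versus column-major order. Python is about as far from the metal as you can get, so the effect here is muted; in C, Rust or Java the gap between the two walks can be dramatic because one touches memory sequentially and keeps caches warm while the other does not.

```python
# Same data, same answer, different access pattern. Hardware-sympathetic
# code chooses the traversal order that matches the memory layout.
N = 500
matrix = [[1] * N for _ in range(N)]

def sum_rows(m: list[list[int]]) -> int:
    """Row-major walk: visits each inner list sequentially."""
    return sum(m[i][j] for i in range(N) for j in range(N))

def sum_cols(m: list[list[int]]) -> int:
    """Column-major walk: hops between inner lists on every step."""
    return sum(m[i][j] for j in range(N) for i in range(N))

# The results are identical; only how the memory is touched differs.
assert sum_rows(matrix) == sum_cols(matrix)
```

This is the trade-off the section describes: abstractions hide these details for good reasons, but for a use case like high-frequency trading, choosing the layout-friendly path is exactly where the performance lives.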

I hope you've enjoyed this lightning tour of current trends in the tech industry. There are some others that I didn't have room for, but if you're interested in software development as a team sport, or in protecting the software supply chain, you can read about those in the Technology Radar itself.

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
