[Column] Andy Rowland: Eight ways edge computing can future-proof your organisation

Adopting edge computing is the next important step in future-proofing your infrastructure.

By moving data processing towards the ‘edge’, you bring real-time decision-making to where it’s needed. This supports whatever capabilities will be critical tomorrow, from Internet of Things (IoT) technologies to Artificial Intelligence (AI)-powered applications.

Edge computing will be bigger than cloud computing

I’ve been working at the heart of edge computing for several years now, tracking the evolution of the technology and developing ways for industry to harness its potential. Edge computing is the new growth area, and I believe it will ultimately eclipse the take-up we’ve seen for cloud.

I’ve noticed a change in how organisations approach data: they’re starting to think about how many versions of data they keep, as well as how they store and manage it. This ties in with growing concerns about the amount of energy data centres use, from both a cost and a sustainability point of view. Organisations are finding it makes sense to move processing close to where they create and use the data.

Based on my experience, here are the top eight future-proofing benefits of adopting edge computing:

1 Ensuring business critical applications are always available

Hosting business critical applications in the cloud is a high-risk strategy because connectivity is vulnerable to interruption – a network cable being severed by accident, for example. An edge computing solution supports smooth operations without disruption, even in remote areas. Reliability increases because the solution is less exposed to external interruptions, so its risk of failure falls.

This reliability, combined with the real-time processing that can support so many technologies that improve the end-user experience, can be transformative. Edge computing is an enabler for IoT technologies and AI-powered applications that unlock new, more efficient ways of operating that improve productivity.

2 Facilitating real-time decision-making

Bringing processing to the edge means data isn’t making a roundtrip to central data centres or clouds to be processed, so latency improves to the levels needed to support real-time analysis and decision-making. 

This near instant decision-making is critical to addressing so many emerging and future needs across industry – from optimising manufacturing processes and production scheduling, to running closed loop applications to optimise energy usage and reduce the carbon footprint.

3 Improving sustainability

Edge computing shifts the organisation towards more effective ways of operating that optimise energy use and reduce carbon emissions. It reduces the amount of data centre capacity needed by cutting the volumes of data sent to the core. 

In many cases, running some IT processing alongside Operational Technology (OT) processing at the edge drives efficiencies such as consolidating cooling requirements and combining maintenance visits. 

4 Reducing data and operational costs

Data is the lifeblood of global organisations and the volumes involved are increasing all the time. As data traffic grows, the costs of the bandwidth to support it are spiralling upwards, with no sign of stopping.

Continuing to send vast quantities of data to core data centres or clouds for analysis isn’t sustainable, and the costs of managing and storing this data are growing, too. Edge computing breaks these patterns, so that only intelligent, processed data needs to make the journey to the core.
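The data-reduction pattern described above – process raw readings at the edge, forward only the intelligent summary to the core – can be sketched in a few lines. The field names, threshold, and summary shape below are illustrative assumptions, not any specific product’s API:

```python
"""Sketch: edge-side aggregation so only summarised data travels to the core."""
from statistics import mean

def summarise_readings(readings, threshold=75.0):
    """Reduce a batch of raw sensor readings to a compact summary.

    Only this summary (a handful of values) is forwarded to the core
    data centre, instead of every raw reading.
    """
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
        # Flag anomalies locally so the core only sees what matters.
        "alerts": [r for r in readings if r > threshold],
    }

# A batch of raw telemetry collapses to one small record.
raw = [70.1, 70.4, 71.0, 76.2, 70.8, 69.9]
summary = summarise_readings(raw)
```

In practice the batch window, the statistics kept, and the alert rule would all be tuned per application, but the principle is the same: bandwidth and storage costs scale with the summary, not with the raw data volume.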

5 Meeting data sovereignty regulations

Data sovereignty legislation is already rigorous, and it will continue to shape organisations’ ability to extract value from data. Edge computing is a flexible way to stay compliant, keeping data storage and processing in-country rather than sending data abroad to a main data centre or public cloud.

6 Supporting innovative applications

Talking to our edge computing partners, the biggest use cases they’re meeting at the moment involve private 5G networks and remote ways of bringing expertise into operating environments with Augmented Reality (AR) and Virtual Reality (VR).

It makes sense that, after tasting the possibilities during the pandemic, organisations don’t want to go back to flying experts out to locations for training or maintenance, for example. Instead, they’re using smart glasses and AR apps to guide maintenance remotely and using VR for training. Edge computing is critical to delivering the ultra-low latency these applications need.

7 Supporting the needs of remote locations

Sometimes edge is the only option. For much of the natural resources sector, cloud connectivity is non-existent, severely limited, or very expensive. For remote mining sites and oil fields, edge processing is often the only choice for hosting apps that reduce expensive unplanned downtime and for supporting local engineers with VR training for health and safety.

Recently we’ve been approached by clients keen to improve the energy efficiency of their bulk ore carriers and LNG tankers. In both cases, cloud connectivity is very expensive as the only option is via satellite, so edge processing on the vessel to run applications to optimise the use of marine diesel is the only viable option.       

8 Supporting faster deployment of updates and in-life change requests

Edge computing delivers local processing power with central control, and this can transform the arduous process of updating local information.

Take digital signage in retail, for example. Controlled centrally, it enables consistency in the customer experience and makes it possible to change all store displays at the touch of a button. Centralised, remote configuration also reduces the chance of missed software patches.

Andy Rowland is the Head of Digital Manufacturing at BT.

[Column] Andrew Cruise: The hidden costs of owned infrastructure versus cloud

If your business, like many others, is faced with the decision of running your own infrastructure or migrating to the cloud, you’ve likely already done your homework. You know that although the benefits cloud offers are numerous, such as increased agility and efficiency, longer-term hardware efficacy and greater security, it comes at a cost.

And, at first glance, managing your own infrastructure might seem less expensive. But it comes with hidden costs few people are aware of. Businesses usually do this cost analysis when they’re about to replace their hardware during a refresh cycle and are weighing cloud against on-premise infrastructure. The argument in favour of on-premise is always that it’s a one-off expense, plus monthly power costs and a salary for an engineer, and that’s it – whereas cloud adds up over time and amounts to a larger number. If that’s all that’s considered, on-premise often comes out on top.
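That naive comparison, and how a single hidden risk cost reverses it, is easy to put in numbers. Every figure below is invented purely for illustration, not a quoted price:

```python
"""Back-of-the-envelope TCO sketch over a three-year refresh cycle.
All rand amounts are illustrative assumptions."""

def on_prem_tco(hardware, monthly_power, monthly_salary, months=36):
    # The "naive" on-premise view: one-off hardware, power, one engineer.
    return hardware + months * (monthly_power + monthly_salary)

def cloud_tco(monthly_fee, months=36):
    # Cloud: a recurring fee that adds up over time.
    return months * monthly_fee

naive_on_prem = on_prem_tco(hardware=500_000,
                            monthly_power=5_000,
                            monthly_salary=50_000)   # R2.48m over 3 years
cloud = cloud_tco(monthly_fee=70_000)                # R2.52m over 3 years

# On the naive view, on-premise narrowly "wins"...
second_engineer = 36 * 50_000
risk_adjusted = naive_on_prem + second_engineer      # R4.28m
# ...but pricing in even one hidden cost (a second engineer,
# before warranties or a DR site) reverses the conclusion.
```

A fuller model would also price in the disaster recovery site, warranty renewals, and the cost of downtime while an unfilled engineering post is recruited for.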

But there are several additional costs to on-premise that should be factored in. These costs mostly have to do with risk, and businesses tend not to take risks into account in their calculations because they’re so difficult to quantify.

Besides the obvious need for backup and recovery systems for when the power goes out, be sure to consider these hidden costs when doing your analysis:

1.    Expertise

We always tell businesses running their own infrastructure that they need at least two competent engineers to manage it, at a cost of between R50,000 and R100,000 per month. One might seem enough, but what happens when that one person is not available? What if they’re hospitalised, or resign and you can’t replace them (immediately or even at all) because of the global skills shortage in this field? You might have another staff member who knows just enough to do the basics, but if something disastrous happens, will your business survive the extended downtime? Due to their focus and scale, specialist cloud providers can attract and retain the best talent to ensure their cloud infrastructure is well architected and maintained.

2.    Sufficient spec

SMEs are especially prone to ‘under-speccing’ their infrastructure due to budget constraints. A proper enterprise solution means having sufficient storage, power, and processing not just now but well into the future as the business evolves. Then there’s disaster recovery, which should ideally be a second site with matching infrastructure. Because such sites can sit idle for years until an emergency, businesses often tick the disaster recovery box by keeping old hardware around for the purpose. Then, when it’s actually needed, that hardware can’t do the job. Not being able to provision a sufficiently enterprise-grade environment on your own is a business risk.

3.    Warranties and licensing

Software and hardware warranties, and software licensing or subscriptions, also need to be considered. Better cloud providers make sure everything is kept under warranty, while businesses often let warranties lapse. You might have the expertise to fix some problems in-house, but what happens when you need the manufacturer’s support or have to replace faulty hardware? Extended warranties are an important, often necessary, expense.

4.    Ageing hardware

Because the cloud versus on-premise decision is usually made during a refresh cycle, decision makers can be blinded by the brand-new hardware they’re considering. But this hardware will only be great for a while. In two or three years it will start slowing down, and there’s a cost to running slow hardware. Older technology draws more power and takes up more space – not to mention the performance sacrifice. And, of course, as equipment ages it becomes more prone to failure. This ageing problem also exists with hyperscalers like AWS and Azure: when you reserve compute instances for one to three years (at a discounted rate, usually paid up front), you are stuck on that old hardware for the duration. Good cloud providers alleviate the cost-versus-performance issue because they are constantly upgrading their equipment. This also means you’ll always have the latest technology available, promoting efficiency and encouraging innovation.

Cloud is, in a way, like an insurance payment: it mitigates all these risks by providing the expertise, volume and scale that allow you to achieve levels of availability and redundancy you can’t achieve on-premise. And if you use a specialist local provider, you’ll always have access to telephone support and the best expertise, ensuring that any problems are quickly solved.

Andrew Cruise is the managing director at Routed.

[Column] Marilyn Moodley: Saving costs while moving to the cloud at the same time is possible

The key to saving costs while moving to the cloud is optimisation: a combination of rightsizing, migrating some workloads to the cloud, and putting a strategy in place to manage future needs.

Here are some important points to consider on your cost-saving journey.

Software licence reconciliation

According to Gartner, less than 25 percent of organisations have a mature strategy for optimising their licensing spend. That’s a lot of money being left on the table. In a way, that’s understandable, because most companies don’t know where to begin. A good starting point is creating an overview of your entitlements and your actual usage, and a comparison between the two. Because some software programmes have been in use for years, it’s hard to keep track of what licences you own, what you need, and how to optimise them. Some licences may also have been purchased for a specific project that is no longer running.

As if this weren’t challenging enough, many organisations have started to deploy software programmes in the cloud, which come with their own set of challenges. This might be the case for you as well. You may have migrated some systems or purchased new programmes in the cloud to save costs. But many workloads in the cloud may be over-provisioned if excess computing and storage capacity, as well as excess licences, were transferred to the cloud.

If you want to optimise your cloud spend, you will need to look at software usage right down to the employee level. For example, check when someone last logged on to a specific product. If they haven’t for some time, the licence might not be needed anymore and you can either reassign or terminate it. Having this clear view of licence spend will help you determine the strategy you need to follow to achieve further cost savings.
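The last-login check described above amounts to a simple filter, assuming you can export per-user last-login dates from your software inventory. The data, field names, and 90-day idle window below are all illustrative assumptions:

```python
"""Sketch: flag licences that look reclaimable based on last login."""
from datetime import date, timedelta

def flag_unused(assignments, today, idle_days=90):
    """Return users whose licence looks reclaimable: never logged in,
    or idle for longer than `idle_days`."""
    cutoff = today - timedelta(days=idle_days)
    return [user for user, last_login in assignments.items()
            if last_login is None or last_login < cutoff]

# Illustrative inventory export: user -> last login date (None = never used).
assignments = {
    "thandi": date(2024, 5, 1),
    "pieter": date(2023, 11, 12),   # idle for months
    "ayesha": None,                 # licence assigned, never used
}
reclaimable = flag_unused(assignments, today=date(2024, 5, 20))
```

The flagged licences become candidates for reassignment or termination; a real exercise would also cross-check them against entitlements and contract terms before acting.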

Rightsize, don’t downsize

After reconciling, it’s time to rightsize by eliminating what is no longer needed. Start by going through all contractual documents. Read the terms and conditions in your agreements and understand their impact on your current situation. Terminate licences that are unused (shelfware) and will not be used in the future. While you won’t get your money back, you will save costs by no longer paying the corresponding maintenance and support fees.

But terminating isn’t the only way to save costs. Rightsizing means eliminating everything that’s not needed. This could also include:

-Support: Some products still in use might not need maintenance and support at all. You can substantially reduce your costs by cancelling support (the average cost of support and maintenance is 20 percent of the list licence cost). Keep in mind that some vendors, such as SAP, have a general policy that your entire licence estate should be under the same level of support, and will only allow partial termination if that is included in your agreement.

-Adjustments: You can also adjust some licences. You could have licences that cover more functionalities than your employees need or user types that provide more rights than needed. For example, everyone in the organisation could have editor rights, but only some employees really need full functionality. Rightsize by removing premium features from some licences.
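The 20 percent rule of thumb above makes the potential support saving easy to estimate. The licence count and list price below are invented purely for illustration:

```python
"""Illustrative arithmetic: annual support saving from cancelled shelfware."""

def annual_support_cost(list_licence_cost, support_rate=0.20):
    # Support and maintenance typically run ~20% of list licence cost per year.
    return list_licence_cost * support_rate

# Example: 50 shelfware licences at a R10,000 list price each.
shelfware_value = 50 * 10_000          # R500,000 of unused licences
yearly_saving = annual_support_cost(shelfware_value)
```

Even though the original licence spend is sunk, cancelling support on that shelfware recovers a fifth of its list value every year, which is why the reconciliation step matters.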

Find an independent advisor

With the complexity surrounding licences and cloud spend, finding an independent advisor could prove to be a useful investment that will save costs in the long run as your organisation needs change. Microsoft contracts, for instance, are typically three years long. Those who signed a contract in 2019 would have experienced significant changes throughout 2020 as remote work became commonplace virtually overnight. 

SoftwareONE’s Microsoft Advisory Managed Services gives companies value for their Microsoft investment through increased visibility and leading support services while providing actionable recommendations to help optimise current contracts.

In addition, Gartner’s research notes an increase in software audits for companies of all sizes and industries. The four major publishers that perform regular audits are IBM, Oracle, SAP, and Microsoft. You typically cannot avoid an audit, but you can prepare for it to minimise costs. Having an independent software licensing firm keep track of all your licences will help you navigate the audit.

Smart investments

The next step is to invest the savings you made into funding IT asset management (ITAM) teams to help you gain more insight and achieve bigger savings.

When considering a move from on-premise to cloud, for example, you will undergo just as much of a financial transformation as a digital one. You aren’t just moving environments – you’re shifting the mindset – and an elastic model calls for ongoing management. ITAM teams should create a plan to manage SaaS and cloud in a way that delivers significant value to the organisation. This means a 12-month ITAM roadmap covering everything from traditional asset management to maturing (cloud and SaaS) and early-adoption assets. A well-executed 12-month roadmap should enable you to expand your ITAM team to prepare for a more complex tech landscape, start managing SaaS and cloud technologies based on where you are today, and develop key strategic alliances to meet the right business outcomes.

Marilyn Moodley is the South African Country Leader for SoftwareONE.

[Column] Sumeeth Singh: CFO becomes key to organisational cloud future

The ‘boring’ stereotype of the CFO as simply a sophisticated number cruncher is giving way to a role that combines the best of technology with financial know-how to unlock business value in a cloud-driven world. In fact, such has the pervasiveness of technology and the cloud become that CIOs can no longer claim to be the sole custodians of this responsibility. A partnership between tech and finance is crucial if a company is to stay relevant. Think of it as sneakers meet suits for a brave new world led by innovative companies.

If anything, CFOs must become digital leaders themselves as the finance role is reinvented, given how rapidly artificial intelligence, machine learning, automation, and cloud have become integrated into every aspect of a business. And when you throw in the potential of real-time data analytics, thanks to the high-performance compute capabilities of the cloud, CFOs have a wealth of insights available to help shape future business strategy. But if this is to yield maximum benefit for an organisation, regardless of its size or industry sector, the partnership between CIO and CFO must be a smooth one.

Tech insights

The cloud is no longer something only the CIO needs to take responsibility for. Modern CFOs fulfil a critical role in helping get organisations cloud-ready. Their understanding of the business, its unique challenges, and where to focus efforts to enhance operations must be combined with technology know-how and an awareness of where the evolution to the cloud can deliver the best returns. If the CIO is seen as being driven by technology, it is the CFO who needs to take that and inject it with financial analysis and insight to understand where the investment can benefit the organisation the most.

So, moving beyond being someone who just signs the cheques, the modern CFO takes their own technology understanding, combines it with input from the CIO, and then targets the areas with the best return on investment. There is no getting around the fact that the CFO will always be guided by the numbers. But what is different in the modern, cloud-ready organisation is that the role is now influenced by the potential of technology and an increased willingness to explore risks (within reason) that can turn into revenue-generating opportunities.

All about the cloud

As recently as 2018, Deloitte research highlighted how sceptical CFOs are of spending based on the promise of savings, especially as it pertains to the transition to the cloud. However, the research at the time also highlighted the importance of finance having a seat at the table in this kind of technology decision-making.

Fast forward to the present, and the disruption caused by the events of the past two years has illustrated the need for ‘bean counters’ and ‘tech geeks’ to work together if the organisation is to have any hope of surviving. Hybrid work, digital transformation, and multi- and hybrid clouds are just some of the ways in which things have evolved since the onset of the pandemic.

Perhaps more critically, companies have finally realised they can no longer afford to keep their data in siloes. If anything, it will be the CFOs and CIOs that become the stewards of that data as they work with the rest of the C-suite to bring improved agility into traditional environments.

While nobody is advocating a rip-and-replace approach to legacy solutions and infrastructure, the CFO is no longer focused on ‘sweating the asset’. Instead, they are looking at how to enhance what has been put in place through cloud-based solutions that can bridge the gap between the old and the new. The proverbial secret sauce to this lies in a cloud adoption/operating model that goes beyond just technology but holistically looks at the business overall. Being willing to look beyond crunching the numbers and apply innovative technology where it makes business sense to do so will result in a new agility being introduced to the business. Taking and improving what works and evolving what is not effective require the best efforts from both the CFO and the CIO.

The key to everything

There is no getting around the fact that the CFO is a critical cog in any successful cloud migration or adoption project. Having the finance department involved in all technology projects is no longer the challenge it was in the past. Far from being a bottleneck, finance can be an enabler that drives efficiencies faster. But this can only happen if the CFO gets involved on the ground floor and provides the necessary input that can help shape the direction of the cloud project.

And then when discussions turn to licensing consumption costs and the like, the CFO will be better able to make a more informed appraisal than if it is just something that drops in their lap when they need to sign off on a migration.

CFOs, therefore, need to dust off their own sneakers and start wearing them with their suits as they become more technologically informed and partner with CIOs to transform their companies into cloud-forward businesses.

Sumeeth Singh is Head: Cloud Provider Business, Sub-Saharan Africa at VMware.