[South Africa] Netskope further expands NewEdge Security Private Cloud in Africa

Netskope has announced its continued global expansion of the Netskope NewEdge network with an additional data centre now launched and taking traffic in South Africa.

The new Cape Town deployment complements the existing Johannesburg infrastructure to address growing customer demand and provide more in-region capacity and redundancy. NewEdge is now powered by data centres in 57 regions, each providing full compute for security traffic processing and the full breadth of SSE services, available to every Netskope customer without surcharges.

Grant Reynolds, Regional Manager for South Africa at Netskope, said “Every year the threat landscape becomes larger and more complex, and regulatory requirements for data protection grow, and so it is no surprise that we are seeing strong growth across Africa. Organisations are eager to adopt an SSE approach to security so that they can embrace working practices and cloud technologies that enhance productivity while ensuring their security policies follow data wherever it goes.

“In practice, however, you cannot place your security at the ‘edge’ without powerful in-region infrastructure as part of a distributed and dedicated service edge, and that is why we are investing in NewEdge around the world. NewEdge is capable of inspecting massive amounts of data locally, with no need for complex, latency-prone backhauling or reliance on unpredictable public internet transport.”

Netskope has partnered with value-added distributor Exclusive Networks Africa across the continent since late last year. Stefan van de Giessen, Exclusive Networks Africa Country Manager: SA and SADC, explains: “We investigated multiple cloud vendors to supplement our security strategy, and were attracted by Netskope’s NewEdge Network – its globally distributed private cloud network, which serves as the network foundation for the Netskope platform, as well as an enabler for current and future Netskope capabilities. We congratulate Netskope on this new development, which will enhance its leadership in the SASE space even further.”
As the infrastructure underpinning the Netskope Security Cloud, NewEdge delivers real-time inline and out-of-band API-driven services spanning Cloud Firewall (FWaaS), Secure Web Gateway (SWG), Cloud Access Security Broker (CASB), Zero Trust Network Access (ZTNA), Cloud and SaaS Security Posture Management (CSPM/SSPM), and more.

In addition to accelerating the adoption of a Zero Trust approach, the performance-focused NewEdge architecture ensures there are no performance trade-offs when security is implemented. This approach reduces the risk of users working around security controls, productivity being negatively affected, or business processes being slowed down or impeded.

Placing data-centric security as close to the user as possible is a key requirement of a SASE-ready architecture and the delivery of world-class Security Service Edge (SSE) capabilities. NewEdge’s expanding global coverage also addresses unique enterprise business requirements and compliance objectives, such as having a data centre in-region for content localisation or to ensure customer traffic stays within intent-based zones.

In addition, every NewEdge data centre is extensively peered with the most commonly used web/CDN, cloud, and SaaS providers to deliver fast, optimised access to the content, apps, and data enterprises most care about. For example, every NewEdge data centre is directly connected with Microsoft and Google, plus other peering relationships in key regions with Amazon, Salesforce, ServiceNow, Box, and Dropbox, among many others.

Funzani Madi, Executive: Information Security (Chief Information Security Officer) for Netskope customer Telkom South Africa noted that “It is important for us that data processing occurs within our home jurisdiction, providing tangible improvements to user experience. The new Cape Town NewEdge facility further demonstrates Netskope’s commitment to the African continent.”

Danie Burger, Information Security Specialist at impact.com, a leading partnership management platform and Netskope customer, added “As a Cape Town-based organisation, having local cloud infrastructure for our security makes a material difference to our users’ experience. With offices and clients around the world, it’s imperative that we don’t create unduly lengthy data flows for security. With infrastructure located in Cape Town, and peered connections to all our main cloud services, we can introduce in-line security controls, speeding up access to cloud services.”

With more than 1,500 customers, Netskope serves some of the world’s largest and most technically demanding organisations. 

www.netskope.com

[Column] Andy Rowland: Eight ways edge computing can future-proof your organisation

Adopting edge computing is the next important step in future-proofing your infrastructure.

By moving data processing towards the ‘edge’, you bring real-time decision-making to where it’s needed. This supports whatever capabilities will be critical tomorrow, from Internet of Things (IoT) technologies to Artificial Intelligence (AI) powered applications. 

Edge computing will be bigger than cloud computing

I’ve been working at the heart of edge computing for several years now, tracking the evolution of the technology and developing ways for industry to harness its potential. Edge computing is the new growth area, and I believe it will ultimately eclipse the take-up we’ve seen for cloud.

I’ve noticed a change in how organisations are approaching data; they’re starting to think about how many versions of data they keep, as well as how they store and manage it. This is tying in with increasing concerns about the amount of energy used by data centres from a cost and sustainability point of view. Organisations are finding it makes sense to move processing close to where they’re creating and using the data.

Based on my experience, here are the top eight future-proofing benefits of adopting edge computing:

1 Ensuring business critical applications are always available

Hosting business critical applications in the cloud is a high-risk strategy because connectivity is vulnerable to interruption – for example, a network cable accidentally severed. An edge computing solution supports smoother operations without disruption, even in remote areas. Reliability increases because the solution is less exposed to external interruptions, so its risk of failure falls.

This reliability, combined with the real-time processing that can support so many technologies that improve the end-user experience, can be transformative. Edge computing is an enabler for IoT technologies and AI-powered applications that unlock new, more efficient ways of operating that improve productivity.

2 Facilitating real-time decision-making

Bringing processing to the edge means data isn’t making a roundtrip to central data centres or clouds to be processed, so latency improves to the levels needed to support real-time analysis and decision-making. 

This near instant decision-making is critical to addressing so many emerging and future needs across industry – from optimising manufacturing processes and production scheduling, to running closed loop applications to optimise energy usage and reduce the carbon footprint.

3 Improving sustainability

Edge computing shifts the organisation towards more effective ways of operating that optimise energy use and reduce carbon emissions. It reduces the amount of data centre capacity needed by cutting the volumes of data sent to the core. 

In many cases, running some IT processing alongside Operational Technology (OT) processing at the edge drives efficiencies such as consolidating cooling requirements and combining maintenance visits. 

4 Reducing data and operational costs

Data is the lifeblood of global organisations and the volumes involved are increasing all the time. As data traffic grows, the costs of the bandwidth to support it are spiralling upwards, with no sign of stopping.

Continuing to send vast quantities of data to core data centres or clouds for analysis isn’t sustainable, and the costs of managing and storing this data are growing, too. Edge computing breaks these patterns, so that only intelligent, processed data needs to make the journey to the core.
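To illustrate the pattern, here is a minimal sketch (the function name, sensor values, and window size are all invented for illustration) of an edge node condensing a window of raw sensor readings into a single summary record, so only the aggregate travels to the core:

```python
# Hypothetical sketch: an edge node summarises raw sensor readings locally,
# so only a small aggregate - not every sample - is sent to the core.
from statistics import mean

def summarise_window(readings):
    """Reduce a window of raw readings to one compact summary record."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }

# One minute of samples at 10 Hz: 600 raw values...
raw = [20.0 + (i % 7) * 0.1 for i in range(600)]
summary = summarise_window(raw)  # ...becomes one four-field record upstream
```

The same idea scales to filtering (send only anomalous readings) or model inference at the edge, with the core receiving results rather than raw streams.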

5 Meeting data sovereignty regulations

Data sovereignty legislation is already rigorous, and this will continue to impact organisations’ ability to extract value from data. Edge computing is a flexible way to stay compliant, keeping data storage and processing in-country rather than sending it out of the country to a main data centre or public cloud.

6 Supporting innovative applications

Talking to our edge computing partners, the biggest use cases they’re meeting at the moment involve private 5G networks and remote ways of bringing expertise into operating environments with Augmented Reality (AR) and Virtual Reality (VR).

It makes sense that, after tasting the possibilities during the pandemic, organisations don’t want to go back to flying experts out to locations for training or maintenance, for example. Instead, they’re using smart glasses and AR apps to guide maintenance remotely and using VR for training. Edge computing is critical to delivering the ultra-low latency these applications need.

7 Supporting the needs of remote locations

Sometimes edge is the only option. For much of the natural resources sector, cloud connectivity is either non-existent, highly limited, or very expensive. For remote mining sites and oil fields, edge processing is often the only choice for hosting apps that reduce expensive unplanned downtime and for supporting local engineers with VR training for health and safety.

Recently we’ve been approached by clients keen to improve the energy efficiency of their bulk ore carriers and LNG tankers. In both cases, cloud connectivity is very expensive as the only option is via satellite, so edge processing on the vessel to run applications to optimise the use of marine diesel is the only viable option.       

8 Supporting faster deployment of updates and in-life change requests

Edge computing delivers local processing power with central control, and this can transform the arduous process of updating local information.

Take digital signage in retail, for example. Controlled centrally, it enables a consistent customer experience and makes it possible to change all store displays at the touch of a button. Plus, centralised remote configuration reduces the chance of missed software patches.

Andy Rowland is the Head of Digital Manufacturing at BT.

[Column] Andrew Cruise: The hidden costs of owned infrastructure versus cloud

If your business, like many others, is faced with the decision of running your own infrastructure or migrating to the cloud, you’ve likely already done your homework. You know that although the benefits cloud offers are numerous, such as increased agility and efficiency, longer-term hardware efficacy and greater security, it comes at a cost.

And, at first glance, managing your own infrastructure might seem less expensive. But it comes with hidden costs few people are aware of. Businesses usually do this cost analysis when they’re about to replace their hardware during a refresh cycle and are weighing cloud against on-premise infrastructure. The argument in favour of on-premise infrastructure is always that it’s a once-off expense, plus monthly power costs and a salary for an engineer, but that’s it. Cloud adds up over time and amounts to a larger number. And if that’s all that’s considered, on-premise often comes out on top.

But there are several additional costs to on-premise that should be factored in. These costs mostly have to do with risk. Businesses tend not to take risks into account in their calculations because they’re so difficult to quantify.

Besides the obvious need for backup and recovery systems for when the power goes out, be sure to also consider these hidden costs when doing an analysis:

1.    Expertise

We always tell businesses running their own infrastructure that they need at least two competent engineers to manage it, at a cost of between R50,000 and R100,000 per month. One might seem enough, but what happens when that one person is not available? What if they’re hospitalised or resign and you can’t replace them (immediately or even at all) because of the global skills shortage in this field? You might even have another staff member who knows just enough to do the basics, but if something disastrous happens, will your business survive that extended downtime? Due to their focus and scale, specialist cloud providers can attract and retain the best talent, to ensure their cloud infrastructure is well architected and maintained.

2.    Sufficient spec

SMEs are especially prone to ‘under-speccing’ their infrastructure due to budget constraints. A proper enterprise solution not only means having sufficient storage, power, and processing now, but well into the future as the business evolves. Then there’s disaster recovery to be considered, which should ideally be a second site with matching infrastructure. Because such sites can sit idle for years until an emergency, businesses often tick the disaster recovery box by keeping old hardware around for this purpose. And then, when it is needed, this hardware can’t do the job. Not being able to provision a sufficiently enterprise-grade environment on your own is a business risk.

3.    Warranties and licensing

Software and hardware warranties and software licensing or subscriptions also need to be considered. Better cloud providers make sure everything is kept under warranty, while businesses often let warranties lapse. You might have the expertise to fix some problems in-house, but what happens when you need the manufacturer’s support or need to replace faulty hardware? Extended warranties are an important, often necessary, expense.

4.    Ageing hardware

Because the cloud versus on-premise decision is usually made during a refresh cycle, decision makers can be blinded by the brand-new hardware they’re considering. But this amazing hardware will only be great for a while. In two or three years the hardware will start slowing down, and there’s a cost to running slow hardware. Older technology draws more power and takes up more space – not to mention the performance sacrifice. And, of course, as items age they become more prone to failure. This “ageing” problem also exists in hyperscalers like AWS and Azure – when one reserves compute instances for 1-3 years (at a discounted rate, usually paid for up front), one is stuck on that old hardware for that period. Good cloud providers alleviate the cost versus performance issue because they are constantly upgrading their equipment. This also means you’ll always have the latest technology available, promoting efficiency and encouraging innovation.

Cloud is, in a way, like an insurance policy: it mitigates all these risks by providing the expertise, volume, and scale that allow you to achieve levels of availability and redundancy you can’t achieve on-premise. And, if you use a specialist local provider, you’ll always have access to telephone support and the best expertise, ensuring that any problems are quickly solved.

Andrew Cruise is the managing director at Routed.

[Column] Marilyn Moodley: Saving costs while moving to the cloud is possible

The key is optimisation through a combination of rightsizing, migrating some workloads to the cloud, and putting a strategy in place to manage future needs.

Here are some important points to consider on your cost-saving journey.

Software licence reconciliation

According to Gartner, less than 25 percent of organisations have a mature strategy for optimising their licensing spend. That’s a lot of money being left on the table. In a way, it’s normal, because most companies don’t know where to begin. A good starting point is creating an overview of your entitlements and usage situation and a comparison between the two. Because some software programmes have been used for years, it’s hard to keep track of what licences you own, what you need, and how to optimise them. Some licences may also have been purchased for a specific project that is no longer running.

As if this weren’t challenging enough, many organisations have started to deploy software programmes in the cloud, which comes with its own set of challenges. This might be the case for you as well. You may have migrated some systems or purchased new programmes in the cloud to save costs. But many workloads in the cloud may be over-provisioned if excess computing and storage capacity, as well as excess licences, were transferred to the cloud.

If you want to optimise your cloud spend, you will need to look at software usage right down to the employee level. For example, check when someone last logged on to a specific product. If they haven’t for some time, the licence might not be needed anymore and you can either reassign or terminate it. Having this clear view of licence spend will help you determine the strategy you need to follow to achieve further cost savings.
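As a sketch of what that employee-level check might look like, assuming a simple export of per-user last-login dates (the user names, dates, and cutoff are all hypothetical):

```python
# Hypothetical sketch: flag licences with no login in the last 90 days
# as candidates for reassignment or termination.
from datetime import date, timedelta

def stale_licences(last_logins, today, max_idle_days=90):
    """Return users whose last login is older than max_idle_days."""
    cutoff = today - timedelta(days=max_idle_days)
    return sorted(u for u, last in last_logins.items() if last < cutoff)

last_logins = {
    "alice": date(2022, 5, 1),
    "bob": date(2021, 11, 12),   # idle well past the cutoff
    "carol": date(2022, 4, 20),
}
print(stale_licences(last_logins, today=date(2022, 5, 15)))  # ['bob']
```

In practice the login data would come from the vendor’s admin console or a SaaS-management tool, but the reconciliation logic is the same.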

Rightsize, don’t downsize

After the reconciliation, it’s time to rightsize by eliminating what is no longer needed. Start by going through all contractual documents. Read the terms and conditions included in your agreements and understand their impact on your current situation. Terminate licences that are unused (shelfware) and will not be used in the future. While you won’t get your money back, you will save by no longer paying the corresponding maintenance and support costs.

But terminating isn’t the only way to save costs. Rightsizing means eliminating everything that’s not needed. This could also include:

-Support: Some products still in use might not need maintenance and support at all. You can substantially reduce your costs by cancelling support (the average cost of support and maintenance is 20 percent of the list licence cost). Keep in mind that some vendors, such as SAP, have a general policy that your entire licence estate should be under the same level of support, and will only allow partial termination if that is provided for in your agreement.

-Adjustments: You can also adjust some licences. You could have licences that cover more functionalities than your employees need or user types that provide more rights than needed. For example, everyone in the organisation could have editor rights, but only some employees really need full functionality. Rightsize by removing premium features from some licences.
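Using the 20 percent maintenance figure above, the annual saving from terminating shelfware is straightforward to estimate; a minimal sketch, with invented licence counts and prices:

```python
# Hypothetical sketch: annual maintenance saved by terminating shelfware,
# assuming maintenance runs at 20% of list licence cost per year.
MAINTENANCE_RATE = 0.20

def annual_maintenance_saving(shelfware):
    """shelfware: list of (licence_count, list_price_per_licence) tuples."""
    return sum(count * price * MAINTENANCE_RATE for count, price in shelfware)

# 40 unused licences at R3,000 list and 10 at R12,000 list:
saving = annual_maintenance_saving([(40, 3000), (10, 12000)])
print(f"R{saving:,.0f} per year")  # R48,000 per year
```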

Find an independent advisor

With the complexity surrounding licences and cloud spend, finding an independent advisor could prove to be a useful investment that will save costs in the long run as your organisation’s needs change. Microsoft contracts, for instance, are typically three years long. Those who signed a contract in 2019 would have experienced significant changes throughout 2020 as remote work became commonplace virtually overnight.

SoftwareONE’s Microsoft Advisory Managed Services gives companies value for their Microsoft investment through increased visibility and leading support services while providing actionable recommendations to help optimise current contracts.

In addition, Gartner’s research notes an increase in software audits for companies of all sizes and industries. The four major publishers that perform regular audits are IBM, Oracle, SAP, and Microsoft. You typically cannot avoid an audit, but you can prepare for it to minimise costs. Having an independent software licensing firm keep track of all your licences will help you navigate the audit.

Smart investments

The next step is to invest the savings you made into funding IT asset management (ITAM) teams to help you gain more insight and achieve bigger savings.

When considering a move from on-premise to cloud, for example, you will undergo just as much a financial transformation as a digital transformation. You aren’t just moving environments – you’re shifting the mindset – and an elastic model calls for ongoing management. ITAM teams should create a plan to manage SaaS or cloud to deliver significant value to the organisation. This means a 12-month ITAM roadmap covering everything from traditional assets to maturing (cloud and SaaS) and early-adoption assets. A well-executed 12-month roadmap should enable you to expand your ITAM team to prepare for a more complex tech landscape, start managing SaaS and cloud technologies based on where you are today, and develop key strategic alliances to meet the right business outcomes.

Marilyn Moodley is the South African Country Leader for SoftwareONE.

[Column] Sumeeth Singh: CFO becomes key to organisational cloud future

The ‘boring’ stereotype of the CFO as simply a sophisticated number cruncher is giving way to one where the role combines the best of technology with financial know-how to unlock business value in a cloud-driven world. Such has the pervasiveness of technology and the cloud become that CIOs can no longer lay claim to being its sole custodians. A partnership between tech and finance is crucial if a company is to stay relevant. Think of it as sneakers meet suits for a brave new world led by innovative companies.

If anything, CFOs must become digital leaders themselves as the finance role is reinvented, given how rapidly artificial intelligence, machine learning, automation, and cloud have become integrated into every aspect of a business. And when you throw in the potential of real-time data analytics, thanks to the high-performance compute capabilities of cloud, CFOs have a wealth of insights available to help shape future business strategy. But if this is to yield maximum benefit for an organisation, regardless of its size or industry sector, the partnership between CIO and CFO must be a smooth one.

Tech insights

The cloud is no longer something only the CIO needs to take responsibility for. Modern CFOs fulfil a critical role in helping get organisations cloud-ready. Their understanding of the business, its unique challenges, and where to focus efforts to enhance operations must be combined with technology know-how and an awareness of where the evolution to the cloud can deliver the best returns. If the CIO is seen as being driven by technology, it is the CFO who needs to take that and inject it with financial analysis and insights to understand where the investment can benefit the organisation the most.

So, moving beyond someone who just signs the cheques, the modern CFO takes their own technology understanding, combines it with input from the CIO, and then targets the best areas for the highest return on investment. There is no getting around the fact that the CFO will always be guided by the numbers. But what is different for the modern, cloud-ready organisation is that this role is now influenced by the potential of technology and an increased willingness to explore risks (within reason) that can transform into revenue-generating opportunities.

All about the cloud

As recently as 2018, Deloitte research highlighted how CFOs are sceptical of spending based on the promise of savings, especially as it pertains to the transition to the cloud. However, the research at the time did highlight the importance of finance having a seat at the table when it comes to this kind of technology decision-making.

Fast forward to the present, and the disruption caused by events of the past two years has illustrated the need for ‘bean counters’ and ‘tech geeks’ to work together if the organisation is to have any hope of surviving. Hybrid work, digital transformation, and multi- and hybrid clouds are just some of the ways in which things have evolved since the onset of the pandemic.

Perhaps more critically, companies have finally realised they can no longer afford to keep their data in siloes. If anything, it will be the CFOs and CIOs that become the stewards of that data as they work with the rest of the C-suite to bring improved agility into traditional environments.

While nobody is advocating a rip-and-replace approach to legacy solutions and infrastructure, the CFO is no longer focused on ‘sweating the asset’. Instead, they are looking at how to enhance what has been put in place through cloud-based solutions that can bridge the gap between the old and the new. The proverbial secret sauce to this lies in a cloud adoption/operating model that goes beyond just technology but holistically looks at the business overall. Being willing to look beyond crunching the numbers and apply innovative technology where it makes business sense to do so will result in a new agility being introduced to the business. Taking and improving what works and evolving what is not effective require the best efforts from both the CFO and the CIO.

The key to everything

There is no getting around the fact that the CFO is a critical cog in any successful cloud migration or adoption project. Having the finance department involved in all technology projects is no longer the challenge it was in the past. Far from becoming a bottleneck, finance can be an enabler that drives efficiencies faster. But this can only happen if the CFO gets involved on the ground floor and provides the necessary input that can help shape the direction of the cloud project.

And then when discussions turn to licensing consumption costs and the like, the CFO will be better able to make a more informed appraisal than if it is just something that drops in their lap when they need to sign off on a migration.

CFOs, therefore, need to dust off their own sneakers and start wearing them with their suits as they become more technologically informed and partner with CIOs to transform their companies into cloud-forward businesses.

Sumeeth Singh is Head: Cloud Provider Business, Sub-Saharan Africa at VMware.

BT launches next-generation multi-cloud connectivity solution

BT today announced the launch of a next-generation cloud connectivity solution designed to accelerate customers’ digital transformation.

Called Connected Cloud Edge, it extends the company’s global network into strategic carrier-neutral facilities (CNFs). This gives customers access to a wide range of third-party cloud-based applications and services without having to provision individual connections to each of them. 

It builds on BT’s partnership with Equinix, the world’s digital infrastructure company™. Equinix hosts major cloud and software-as-a-service providers within diverse digital business ecosystems at its facilities around the world.

The solution will initially be available at 13 CNFs and will be customisable with multi-cloud routing services and additional capabilities, such as SD-WAN and firewalls, further augmenting services already available from BT.

BT and Equinix are marking the launch with the publication of a report by IDC, What Digital Leaders Know About Cloud Interconnectivity and Ecosystems Development. It analyses how cloud networking is evolving to reflect a shift to cloud-native and multi-cloud digital ecosystems, and the approaches companies have taken in adopting the technology.

“Connected Cloud Edge will remove the complexity of sourcing and managing individual connections to the services underpinning customers’ digital transformation,” said Hriday Ravindranath, chief product and digital officer, Global, BT. “To do this, we’re pre-integrating BT’s network with Equinix Fabric™ to provide a fully managed multi-cloud solution.” 

“We’re delighted to be innovating with our long-standing partner BT, and excited for the launch of Connected Cloud Edge,” said Jules Johnston, senior vice president, Global Channel, Equinix. “To ensure businesses are ready for whatever the future might bring, they need their enterprise networks to be tightly integrated into platforms that connect the world’s densest ecosystems of cloud and technology providers. BT’s new solution offers companies the ability to build and evolve their multi-cloud strategies as they transition to cloud-centric architectures with the agility and resiliency they demand.”

www.globalservices.bt.com

Dimension Data East Africa: Embracing the skills of those born in the cloud

How to mitigate the risk of the massive brain drain that has become a perennial problem in Africa is something that keeps many business and technology leaders up at night. Investing in skills development just to see talent moving abroad is not only frustrating but also costly.

“Organisations across East Africa have embraced the cloud to help drive business growth. They are spending time and money training people and giving them exposure to new technologies and platforms. But because of the global demand for these skills, many are leaving Africa to pursue work in the United States and Europe,” says Andrew Ngunjiri, Practice Manager: Intelligent Infrastructure at Dimension Data East Africa.

Training differently

Those legacy organisations that used to model their businesses around the ‘I build, I own, I run’ approach have realised this is no longer sustainable. More hubs have sprung up that provide young people with access to coding and DevOps skills. In many respects, this is creating a mill for talent seeking their fortunes elsewhere. This exodus is now fuelled by companies refocusing the areas in which they are upskilling and reskilling talent.

“Discussions have turned to how companies can consciously retain skills locally and incentivise people not to be lured to international markets. As such, there has been a surge in home-grown skills development programmes built around this,” says Ngunjiri.

According to Lee Syse, Senior Cloud Solutions Architect for Sub-Saharan Africa at VMware, it has been interesting to see how the types of skills that are in demand have changed.

“Traditional organisations have spent a lot of time focusing on the build portion of the legacy approach. Typically, this consists of people specialising in any of the infrastructure, networking, and storage pillars. Today, there is less of a focus on these areas of specialisation and more of a demand for generalists. Partners like Dimension Data are focused on leveraging these generalists to manage all aspects of the cloud,” he says.

A generalist world

Ngunjiri questions whether African companies are doing enough to create future technologists who can be generalists. 

“I suppose it depends on where in the value chain the organisation sits. Few companies are involved in the building phase as they can merely leverage the cloud environments that have already been created. We see a demand for transferable skills, with coding being a great example. People learn the ability to code as a fundamental skill. They can then transfer that to work on any cloud platform,” he says.

There is also a convergence taking place inside organisations. They no longer have several specialist teams but rather a central team that is focused on delivering a specific organisational outcome. 

“In the cloud environment, it is no longer about speeds and feeds. Attention is now on how to create a platform that enables businesses to reach their objectives. Companies used to value hardcore technology certifications. Now, those individuals with operational management skills are the most in-demand,” says Ngunjiri. 

More adaptive

Syse agrees and says that this is putting pressure on educational institutions to change as well. 

“Talented people who are going through this transformation process are playing a catch-up game. Those who used to be specialists in one area must now start at a basic level in another area. It can be quite overwhelming to move between these stages of development as there is a lot of training to be done in various areas,” he says.

Syse cites AWS certifications as an example.

“It really comes down to just knowledge-sharing as no person can be a specialist in all the AWS areas available. Companies also need to think differently about certification. It is less about the piece of paper a person has and more about their ability to execute on what is required.”

Ngunjiri echoes this sentiment.

“It is no longer about the piece of paper that shows what a person can do. For a future-forward organisation, it is about using people with executable skills who can make things work. Of course, the demand for skills is very much guided by the industry sector in which a person operates. But people will always gravitate towards where the demand is the highest to identify where to upskill themselves,” says Ngunjiri.

Unfortunately, traditionally-minded business leaders are still too slow to make this mindset change. They must be willing to look beyond purely academic knowledge and factor in a person’s practical skills. Fortunately, the new generation coming through is born in the cloud. And with them, a new way of doing things will follow.

Skills development

“Much of it comes down to building skills that can cater to future growth instead of simply buying skills in demand. Today’s skills are so diverse they cannot be shopped off the street. Talent must be brewed internally, which makes those people highly in demand,” says Syse.

“And with that, the way organisations retain their most valuable skills will be unique to each of them as they continue to fight the war against the brain drain,” concludes Ngunjiri.

www.dimensiondata.com

Vodafone Partners with Oracle to Accelerate Technology Modernization on Oracle Cloud Infrastructure

Oracle and Vodafone, the largest pan-European and African technology communications company, have announced a strategic partnership to modernize the operator’s European IT infrastructure and accelerate its transition to the cloud.

Under the multi-year agreement, Vodafone will modernize and migrate a large number of its systems to OCI Dedicated Region, a fully managed cloud region that brings all of Oracle’s public cloud services into Vodafone’s own network and data centers. This will provide a dedicated cloud platform for Vodafone to modernize its thousands of Oracle databases as well as to support and scale its mission-critical OSS and BSS systems, including CRM and order management. 

The implementation will also enable Vodafone to build new cloud-based applications faster, and by taking advantage of its geographical scale, launch them in multiple markets at the same time. 

Oracle will deploy OCI Dedicated Regions in Vodafone’s main data centers that manage its European IT and network operations. The deployment of public cloud services directly inside Vodafone’s own network and data centers will enable the operator to flexibly modernize, manage and automate its critical systems using new technologies such as autonomous services, and more easily meet the latency and performance requirements of these applications. 

Vodafone will also have close access to compute resources that enable it to dynamically augment and scale services in multiple geographies according to changing business requirements, while reducing operational costs and meeting data residency regulations.

The partnership supports Vodafone’s multi-year initiative to consolidate and modernize the technology infrastructure that supports its mission-critical systems into a shared, on-premises, open-standard platform capable of supporting and scaling next-generation digital services. It will also help Vodafone further its Tech 2025 goals: reducing time-to-market of its services, providing stand-out customer experiences through always-on services, and reducing operational costs through automation.

“As Vodafone focuses on growth, data is key to how we evolve our business, build new capabilities and innovate to meet the needs of our customers. Our collaboration with Oracle supports our vision of becoming a technology communications company,” said Scott Petty, Chief Digital & IT Officer, Vodafone. “The agreement enables Oracle to bring its entire portfolio of cloud services directly into Vodafone data centers. This includes the same architecture, software, services and control plane used in OCI public cloud. The flexibility offered by OCI enables us to build a robust, secure, and extensible cloud platform in our own data centers, while also providing the operational agility and scalability required to support the growth and diversification of our business.”

“Telecom companies are reimagining their business models to innovate and monetize new opportunities at speed and at scale,” said Clay Magouyrk, executive vice president, Oracle Cloud Infrastructure. “Vodafone is at the forefront of this thinking, and we are excited to bring the power of OCI to Vodafone’s data centers to support the company and its partners as they fast-track this vision and deliver the next generation of connected services.” 

“Now more than ever, telecom companies need to quickly adopt new technologies to deliver new innovative products at speed while continuing to meet evolving regulatory requirements. Our partnership with Vodafone is based on achieving this balance, providing a cloud platform that enables Vodafone to modernize and consolidate its existing infrastructure while also building a foundation for a digital future. We are looking forward to partnering with one of the telecom sector’s digital trailblazers as we help shape the next generation of communication services and business models,” said Jonathan Tikochinsky, executive vice president, global strategic clients, Oracle.

www.vodafone.com

www.oracle.com

[Column] Andrew Ngunjiri: The state of the cloud in Africa – A partner’s perspective

The cloud in Africa is undergoing massive transformation and acceleration. There has been a huge uptake in cloud services, especially when it comes to SMEs turning towards hyperscalers. Meanwhile, more prominent organisations and governments have been embracing the private cloud. 

An EY study has highlighted a new wave of investments spreading across Africa centred on companies migrating to the cloud as they look at becoming more efficient while reducing their operational costs. Closer to home, the Kenyan market has always been one of the largest adopters of technology in the region.

Therefore, it is not surprising that there has been a significant interest in cloud services by both the public and private sectors here. Additionally, the public sector and the financial services industry have been vocal about investing in the private cloud to cater to their specific requirements.

This has provided the impetus for many hyperscalers to look at opening operations in Kenya instead of purely relying on their regional offices in South Africa, the United States, and Europe to service the region’s demands for cloud computing services.

Cloud-enabled

Events of the past two years have made it virtually impossible for people to move around. The cloud has therefore become an essential tool for businesses to survive.

Beyond this, there are three reasons why the cloud has become a critical building block for the region. Firstly, it provides the business agility necessary to remain competitive. Secondly, the cloud helps to address any security and compliance concerns resulting from a rapidly evolving regulatory environment. And thirdly, the cloud injects a level of performance and operational efficiency not previously possible.

Even though public and private cloud models provide benefits, we anticipate the hybrid cloud model to win the race for massive adoption. We are already seeing hybrid becoming the natural progression of cloud adoption in the region, with many organisations and governments opting for this model. 

It comes down to a simple matter of practicality. When one looks at the cloud, applications are a massive driver behind its adoption. However, not every application is optimised for the cloud. This means companies must carefully review which ones make sense to move to the cloud and which ones must be kept on-premises.

Another factor impacting the decision to move to hybrid is the strong drive towards compliance, especially data protection. There has been a massive push in Kenya regarding this, with significant investments being made to ensure companies adhere to regulatory requirements. Having already invested in the private cloud, going hybrid means businesses can leverage shared services and infrastructure far more cost-effectively while maintaining compliance.

Navigating obstacles

This does not mean that companies do not face obstacles when it comes to migrating to the cloud. One of the significant ones relates to adoption and IT transformation. There is a huge challenge when it comes to keeping up with developments in this space. Organisations need to manage shorter development cycles and overcome their concerns around controlling costs and mitigating risks.

Because not all applications are cloud-optimised, going about modernising them can add to the complexity of the migration. There is also a reduction in IT budgets to consider. Across the board, companies in the region are seeing a change in ownership take place when it comes to these budgets, which are now moving from the CIO into the rest of the business. Practically, retaining and attracting the right skills for cloud adoption is an ongoing problem.

Organisations must also be constantly vigilant regarding security and compliance as driven by the various regulatory institutions. Additionally, the infrastructure must meet the performance requirements of a cloud-driven environment. Fortunately, Kenya has seen ongoing investments in infrastructure pay off to mitigate concerns around having access to fast and reliable connectivity.

Another obstacle to consider is how a company can derive the maximum benefit from the data it has at its disposal. With data being the new currency, many businesses need to understand how best to unlock the potential of their data.

Digital building blocks

Putting the building blocks in place for a successful digital transformation plan that can simplify the cloud transition is critical. An organisation needs to have end-to-end service capabilities in place. Discussions around the cloud and digital transformation have all centred on how to enable this service.

Companies also need cloud expertise. Some skills are transferable, while others are not. A platform approach to discovery, management, and development across multiple technologies forms part of this discussion. It entails balancing between upskilling existing resources and using trusted third parties.

Throughout this, cost optimisation becomes essential if organisations are to be more efficient around their IT spending and reduce the total ownership cost.

Commercial models must become more flexible. Consumption has shifted from Capex to Opex. Therefore, business and technology leaders need flexibility both in terms of their mindset and the business’s operational model to fully align to a hybrid cloud model.

Yet, the cloud has proven its value to the region, and it will only contribute to accelerated efficiencies. But for this to happen, organisations need to be more open and adaptive to change to ensure they can future-proof their operations.

Andrew Ngunjiri is the Practice Manager: Intelligent Infrastructure at Dimension Data East Africa

SAS’ cloud-first portfolio soars with customer success, industry solutions and strategic partners

SAS boldly stakes its future on a powerful cloud analytics platform and AI-driven, cloud-first industry solutions. It’s the analytics and AI leader’s cloud-first approach that eases customers’ digital transformations. And SAS’ cloud momentum is building: despite the pandemic’s pressure and uncertainty, SAS’ global cloud revenue jumped 19% in 2021. With results like this, SAS is deepening its broad industry portfolio with solutions that support life sciences, energy and martech.

According to McKinsey & Company, 70% of companies using cloud technology plan to increase their cloud budgets. The public cloud computing market is projected to grow to $800 billion by 2024, with implementations across all industries – retail, media, telecom, education, banking, insurance and more.

What’s driving this investment? Forrester Consulting’s new Total Economic Impact study shows organisations deploying SAS® Viya® on Microsoft Azure can see significant returns in as little as 14 months. In fact, one company more than tripled its investment in three years, results that will be explored during the May 18 webinar, Driving 204% ROI With SAS Viya On Microsoft Azure.

Journey to the cloud

SAS’ cloud-first transformation didn’t happen overnight. Using its decades-long legacy as a bridge to the future, step No. 1 was to develop SAS® Viya® as a cloud-first analytics platform. That endeavour ran in sync with ongoing strategic partnership investments, including Microsoft Azure. More recently, SAS joined new partner Cosmo Tech to fortify its digital twin simulation capabilities.

“We transformed our portfolio to be cloud-native and cloud-portable so customers can accelerate their move to the cloud and expand their use of analytics, machine learning and AI,” said Bryan Harris, SAS Executive Vice President and Chief Technology Officer. “At the end of the day, we want our platform and industry solutions to be a critical part of every customer’s analytic innovation.”

Because SAS is both cloud-first and cloud-agnostic, helping customers manage the complexity of analysing massive volumes of data in the cloud is second nature. “Our customers don’t need to stress about data complexity or the details of running analytic workloads in the cloud, because SAS gives them the expertise they need,” said Jay Upchurch, SAS Executive Vice President and Chief Information Officer – who also leads the SAS cloud business. “SAS analytics in the cloud gives our customers a distinct advantage, whether they’re using SAS Viya or an industry solution.”

In 2021, SAS realised the most cloud revenue growth from customers in Asia Pacific (48% growth) and EMEA (29% growth) – and SAS’ commitment to cloud and AI innovation lives in its customers’ successes.

SAS Hackathon teams innovate with SAS in the cloud

The SAS Hackathon is an incubator for innovation and a test bed for AI in the cloud. Hackathon teams use SAS Viya on Microsoft Azure, along with open-source tools, to help solve some of the world’s toughest social and economic challenges.

This year’s SAS Hackathon included a team of eight members from South Africa who competed alongside 69 other teams, representing 135 organisations and 75 countries.

The team from South Africa leveraged advanced augmented intelligence to build a solution that included creating models that could be trained to identify lung disease using digital X-rays. Even with limited data sets available, the models were able to achieve diagnostic accuracy above 90%. While the team focused on lung diseases for the duration of the Hackathon, the models have the potential to be scaled up to include other body parts for the same purpose of quick identification, diagnosis and treatment prioritisation.

www.sas.com