Categories
Cybersecurity

What is disaster recovery?

Many organizations underestimate the need for disaster recovery strategies for their cloud-based applications. However, those that do understand the issues sometimes struggle to put effective plans in place.

Unlike routine IT tasks, these plans require close collaboration and a firm commitment from multiple parties. Many IT services now rely on multiple application components, some of which may run in the cloud and others in data centers. Building an effective disaster recovery plan therefore requires a structured, cross-functional approach that focuses on the resilience of IT services as a whole, not just individual workloads.

Answering the tough questions

To address disaster recovery planning, companies need to question their approach, even if doing so raises uncomfortable issues. This process is particularly useful because surfacing the gaps allows companies to redirect their efforts and engage stakeholders who have been overlooking the risks.

When a workload fails, the service it supports is interrupted, impacting user productivity and eroding customer confidence. Restoring the service requires a certain amount of coordination, and above all it must be carried out quickly to limit the extent of the damage. It is important to remember that it is the responsibility of companies, not cloud service providers, to ensure that disaster recovery procedures are in place.

Developing a disaster recovery plan

Effective disaster recovery planning begins with an assessment of the impact of downtime on the business. This cross-functional exercise identifies all the IT services used by the business, determines the impact (operational and financial) that a service outage could have, and therefore the disaster recovery requirements for each service. Many IT organizations maintain a service catalog and configuration management database (CMDB) to simplify the process of identifying a comprehensive list of IT services. In the absence of such a catalog, the inventory must be established through a discovery process.

To determine the disaster recovery requirements for each service, it is useful to consider two critical metrics: recovery time objective (RTO) and recovery point objective (RPO). The RTO is the amount of downtime (usually measured in hours, days or weeks) that the business can tolerate for a given IT service. The RPO, on the other hand, is the amount of data loss (usually between almost zero and a few hours' worth) that the company can accept for each of those same services.
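
To make these two metrics concrete, here is a minimal sketch, in Python, of how a recovery test could be checked against RTO and RPO targets. The service names and figures are purely hypothetical examples, not recommendations.

    # Illustrative sketch: checking recovery test results against RTO/RPO targets.
    # Service names and figures are hypothetical examples, not real measurements.
    from dataclasses import dataclass

    @dataclass
    class RecoveryObjectives:
        rto_hours: float   # maximum tolerable downtime
        rpo_hours: float   # maximum tolerable data loss, expressed as time

    @dataclass
    class RecoveryTestResult:
        downtime_hours: float
        data_loss_hours: float

    def meets_objectives(objectives: RecoveryObjectives, result: RecoveryTestResult) -> bool:
        """A recovery test passes only if both downtime and data loss stay within targets."""
        return (result.downtime_hours <= objectives.rto_hours
                and result.data_loss_hours <= objectives.rpo_hours)

    services = {
        "order-management": (RecoveryObjectives(rto_hours=4, rpo_hours=1),
                             RecoveryTestResult(downtime_hours=3.5, data_loss_hours=0.5)),
        "internal-wiki":    (RecoveryObjectives(rto_hours=72, rpo_hours=24),
                             RecoveryTestResult(downtime_hours=80, data_loss_hours=2)),
    }

    for name, (objectives, result) in services.items():
        status = "OK" if meets_objectives(objectives, result) else "MISSED"
        print(f"{name}: {status}")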

In practice, there is often a trade-off between these two objectives: an IT service might, for example, be restored quickly but with greater data loss, or vice versa. Unsurprisingly, demanding RTOs and RPOs usually require more expensive technology solutions.

Dependency Mapping and Technology Assessment

After determining the RTOs, the RPOs, and the impact an outage may have on individual IT services, the next step is to understand all of the application components on which they depend. Creating a dependency map for each IT service ensures that appropriate recovery measures are in place for every necessary application component, whether it runs in the data center or in the cloud.
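
As an illustration, a dependency map can be as simple as a small graph structure that can be walked to list every component a service needs. The sketch below uses hypothetical service and component names.

    # Illustrative sketch: a dependency map for IT services, with hypothetical names.
    # Each service lists the application components it depends on, wherever they run.
    dependencies = {
        "online-store": ["web-frontend", "order-api", "payment-gateway", "customer-db"],
        "order-api": ["customer-db", "message-queue"],
        "reporting": ["data-warehouse", "customer-db"],
    }

    def components_to_recover(service: str, mapping: dict[str, list[str]]) -> set[str]:
        """Walk the map to find every component a service needs, directly or indirectly."""
        seen: set[str] = set()
        stack = [service]
        while stack:
            current = stack.pop()
            for dep in mapping.get(current, []):
                if dep not in seen:
                    seen.add(dep)
                    stack.append(dep)
        return seen

    # Every component that must be covered by the recovery plan for this service:
    print(components_to_recover("online-store", dependencies))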

Next, organizations should assess their data protection and resiliency capabilities for each application, including whether they can meet the RTOs and RPOs. This assessment should be done holistically, taking into account the impact of the most severe outage. For example, the right technology may already be in place to recover a single application within the required recovery time, but can that technology recover dozens, hundreds or even thousands of applications in parallel? Can organizations use the same technical solutions in the data center as they do in the cloud? A need for multiple tools will undoubtedly complicate recovery procedures. After assessing current technology capabilities, organizations can then identify additional technical solutions to fill the gaps.
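
The parallelism question can be illustrated with a short sketch: recovering hundreds of applications one by one would blow through most RTOs, whereas a bounded worker pool recovers them in waves. The run_runbook function below is a placeholder for whatever tooling actually performs each recovery.

    # Illustrative sketch: running many recovery runbooks in parallel rather than one by one.
    # run_runbook() is a placeholder for whatever tooling actually performs the recovery.
    import time
    from concurrent.futures import ThreadPoolExecutor, as_completed

    def run_runbook(app: str) -> str:
        time.sleep(0.1)  # stand-in for the real recovery work
        return f"{app}: recovered"

    applications = [f"app-{i:03d}" for i in range(200)]

    # Recovering sequentially would take roughly 200 times the single-app duration;
    # a bounded pool recovers in waves and surfaces failures as they happen.
    with ThreadPoolExecutor(max_workers=20) as pool:
        futures = {pool.submit(run_runbook, app): app for app in applications}
        for future in as_completed(futures):
            print(future.result())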

Document and test recovery steps

While deploying the right recovery tools is critical, technology alone is not enough to ensure disaster recovery. A critical step is to create a hierarchical set of recovery plans that can be used to guide the business through the recovery process. Higher-level plans will document how recovery activities are coordinated, while lower-level plans will include step-by-step procedures to ensure the recovery of each IT service. Developing and maintaining these plans is a significant investment, but they are essential to ensuring effective recovery from a major incident.

To ensure that the plans will work well in practice, they must be tested regularly. Testing should be done at least once a year, and even more frequently for critical applications. Tests can themselves introduce risk if they involve live data. However, testing is an essential part of disaster recovery planning that should not be skipped.

Building Resilience

The public cloud offers enterprises a highly scalable and resilient platform for hosting workloads. When used properly, it can strengthen the resiliency of IT departments. However, adopting the public cloud does not relieve the enterprise of its responsibility for service availability and disaster recovery. While the cloud offers many building blocks to support a recovery strategy, organizations must use them in combination with other technologies and procedures to develop a cohesive plan.

Achieving multicloud resiliency requires a holistic approach to data assets, parts of which overlap with the disaster recovery process. Disaster recovery in a multicloud environment raises additional questions: where the data is stored, what dependencies exist, and how data and workloads can be recovered if an adverse situation arises with a cloud provider.

The objective of disaster recovery planning and testing is to ensure that recovery is possible in accordance with RPO and RTO objectives. In particular, it gives the enterprise's customers, both internal and external, assurance that they will not be affected in the event of downtime.

Categories
Cloud computing

How to optimize the cloud?

Cloud: how can we optimize its use to limit its impact? When we think of climate change, we tend to imagine beaches covered in garbage, black smoke from factories, or the massive deforestation of the Amazon. But some of the damage to the planet is much less visible, like the damage done by the digital sector. The digital sector alone accounts for nearly 4% of global carbon emissions, twice as much as global civil aviation. And given our growing and increasingly energy-intensive digital uses, this figure is expected to jump by 60% by 2040.

Until now, we have chosen to ignore the emissions generated by IT and to focus on its benefits, such as the reduction in paper use within companies or the ability to work and meet remotely. It is time to react.

So any company that wants to be environmentally conscious today needs to make sure that its digital uses are more sustainable. And one of the answers is the cloud.

Cloud players already moving toward responsible digital practices

While the cloud does rely on physical infrastructure such as datacenters, these are becoming increasingly energy efficient, with servers that deliver more performance while consuming less energy. Another factor stands out: the location of data centers. Countries like Sweden, with hydroelectricity, or France, with nuclear power, produce much lower-carbon electricity than most of their European neighbors.

Major providers such as Google Cloud Platform, Amazon Web Services and Microsoft Azure have also all set clear renewable energy targets, mainly through so-called Power Purchase Agreements (PPAs). These power purchase agreements dedicated to companies, SMEs and local authorities provide access to green and reliable energy for many years while promoting the energy transition.

However, even if we are indeed moving in the right direction, it is often difficult to reconcile these promises with rigorous audits under the GHG Protocol and the various reports published on carbon emissions. How can we make sense of it all?

How can we optimize the cloud?

There are several ways that companies and cloud providers can work together to reduce their emissions and get the most out of the technology. First, companies can work directly with their providers, or even specialized agencies, to conduct an audit and estimate their current emissions. The important thing is to get a complete picture of how much carbon is being generated by the company’s activities.

Second, there are two ways to optimize the use of cloud resources. The first is to go back to basics by changing the code itself. It’s important to keep in mind that while optimizing a few lines of code may seem trivial, the overall effect can be very powerful, especially when we’re talking about hundreds or thousands of websites for some multinationals. The other way is to optimize the way the cloud deployment is done. With certain operational optimizations, such as continuous deployment, it is possible to use fewer resources for the same application.

Finally, the cloud also offers the ability to move data and services anywhere there is a suitable datacenter. So when launching a new project, companies should be able to choose the datacenter connected to the grid with the lowest carbon intensity. For example, a data center in Sweden that runs on renewable energy can emit up to 10 times less CO2 per kWh than one powered by coal-fired electricity in Germany.
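
A simple way to picture this choice is a lookup over the carbon intensity of the candidate regions. The figures below are rough, illustrative orders of magnitude rather than vendor data, and the region names are hypothetical.

    # Illustrative sketch: picking the deployment region with the lowest grid carbon intensity.
    # The figures are rough, illustrative orders of magnitude (gCO2e per kWh), not vendor data.
    carbon_intensity = {
        "sweden-north": 30,     # largely hydro
        "france-central": 60,   # largely nuclear
        "germany-west": 350,    # mixed grid with fossil generation
    }

    greenest_region = min(carbon_intensity, key=carbon_intensity.get)
    print(f"Deploy new project to: {greenest_region} "
          f"({carbon_intensity[greenest_region]} gCO2e/kWh)")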

Ultimately, the crux of optimizing a company’s digital impact lies in transparency, and above all in access to information. While cloud providers are beginning to publish some carbon emissions data, these figures are generally based on market-based accounting and do not include location-based information. The revolution is underway, but much progress remains to be made.

Categories
Cybersecurity

Will Microsoft really cut off your security updates in Windows 11?

Windows 11: will Microsoft really cut off your security updates? Microsoft’s strict compatibility requirements mean that a significant number of PC owners will soon be unable to upgrade to Windows 11, even on relatively new hardware. The Redmond firm has made it clear in recent weeks that installing Windows 11 on an unsupported PC means it will not be able to receive updates in the future. This raises the question: if you perform a clean installation of Windows 11 on an incompatible PC, will your PC be deprived of monthly security updates?

To answer this question, let’s take a detour through marketing theory. Have you ever heard of FUD? This acronym, which stands for “fear, uncertainty and doubt”, has been around for a long time, but it was popularized in the 1970s to describe the way the giant IBM discouraged its customers from considering competing products.

Today, FUD has become a classic marketing technique used when there is no valid technical argument against the choice the customer is considering. However, it is strange, and therefore confusing, to see Microsoft use it to discourage customers from installing one of its own products. In the words of the American giant, installing Windows 11 on an unsupported PC is not recommended and can lead to compatibility problems. “If you proceed with the installation of Windows 11, your PC will no longer be supported and will not be able to receive updates. Damage to your PC due to lack of compatibility is not covered by the manufacturer’s warranty,” Microsoft says.

Translation: this doesn’t really say that Microsoft will cut off your access to updates, but simply that you are no longer “entitled” to them. The wording is revealing: Microsoft disclaims any legal responsibility without actually saying what it will do.

More fear than harm

In practice, it would be difficult for Microsoft to configure its update servers to reject requests from PCs based on such detailed configuration information. Doing so would risk blocking customers with valid installations, and it would needlessly anger customers who otherwise have a perfectly satisfactory experience with Windows 11. Instead, this language is a way to convince customers to trade in their old PCs for new ones, the option that puts new revenue in the pockets of Microsoft and its third-party manufacturing partners.

This kind of confusion is not unprecedented. In the days leading up to the launch of Windows 10, Windows skeptics were convinced that Microsoft would pull the rug out from under the updates based on confusing language about “supported device life.”

One Windows expert even claimed that Microsoft would start charging Windows 10 customers for updates within two years… which ultimately turned out to be a false alarm. It’s possible, of course, that a future Windows update could cause performance and reliability issues on older PCs, but the idea of Microsoft punishing customers for following a documented procedure for rolling out the update seems highly unlikely.

Categories
Cloud computing

Cloud-native infrastructure

Cloud-native infrastructure: what requirements, for what benefits? Moving into the cloud is a gradual process. In the long learning curve of cloud technologies, the “lift and shift” principle is only an intermediate phase. It consists of migrating on-premises applications or databases to virtual machines without modifying them, essentially copying and pasting them. By moving IT resources from one environment to another in this way, without really rethinking the technology, a company does not get the full benefit of the cloud.

The next step is cloud native: an application specifically designed for the cloud to take advantage of all its strengths in terms of scalability, high availability, resiliency and interoperability. “Unlike monolithic applications, the granular approach of cloud native allows new functionality to be deployed easily, thus reducing time to market,” says Mohammed Sijelmassi, Chief Technology Officer at Sopra Steria.

To further accelerate development, a company can call on various ready-to-use cloud services offered by providers, such as authentication or monitoring services. “Why rewrite the umpteenth identity management brick when the cloud is full of very good solutions?” asks Mohammed Sijelmassi. “We might as well focus on the differentiating elements for an enterprise.”

With these assets, cloud-native application development has a bright future. By 2025, Gartner estimates that more than 95 percent of new digital workloads will be deployed on cloud-native platforms, up from 30 percent in 2021. IDC, meanwhile, predicts that the share of cloud-native applications will exceed 90 percent by the same time frame.

Microservices, containers, mesh, APIs and DevOps

The environment that hosts this new generation of applications must meet a number of prerequisites. The cloud-native approach is inseparable from the microservices architecture: an application or service is broken down into small software components that can be deployed, upgraded and managed independently.

Microservices go hand in hand with containerization and the flexibility it brings. By encapsulating an application so that it can run on any server, container technology makes it possible to ignore the underlying infrastructure. Kubernetes, an open source project that has become essential, manages and orchestrates the container clusters.
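
For readers who want to see what talking to the orchestrator looks like in practice, here is a minimal sketch using the official Kubernetes Python client (installed with pip install kubernetes). It assumes a kubeconfig is already set up for the target cluster and simply lists the deployments Kubernetes is managing.

    # Minimal sketch using the official Kubernetes Python client (pip install kubernetes).
    # It assumes a kubeconfig is already configured for the target cluster.
    from kubernetes import client, config

    config.load_kube_config()          # reads ~/.kube/config
    apps = client.AppsV1Api()

    # List the deployments the orchestrator is managing, with their replica counts.
    for deployment in apps.list_deployment_for_all_namespaces().items:
        name = deployment.metadata.name
        namespace = deployment.metadata.namespace
        ready = deployment.status.ready_replicas or 0
        desired = deployment.spec.replicas or 0
        print(f"{namespace}/{name}: {ready}/{desired} replicas ready")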

Cloud native also relies on the service mesh, a dedicated infrastructure layer that handles communication between an application's components. It makes it possible to see how the various components interact with one another and thus to optimize the application's performance more easily.

Finally, to round off the buzzwords, cloud native naturally uses APIs to call third-party cloud services. By making it possible to quickly deploy updates to any part of the application, it is also part of the DevOps and CI/CD (continuous integration/continuous delivery) movement.

Lack of internal skills

While, on paper, cloud native ticks all the boxes, there are still a few obstacles to overcome before it becomes widespread. An OutSystems study points to a lack of in-house skills and expertise. “Thinking in terms of microservices application architecture requires a certain maturity,” says Alexandre Cozette, OutSystems’ technical director for France. “Cloud developers need to master the principles of containerization and be familiar with the offerings of hyperscalers.”

To address this talent shortage, OutSystems announced Project Neo in November, a next-generation low-code platform that allows developers to build full-stack cloud-native applications while taking care of all the technical “plumbing” behind them.

“Cloud native adds complexity,” adds Mohammed Sijelmassi. “You have to design the architecture to break down an application into microservices that are sufficiently flexible and controllable. Getting to the right level of granularity requires some skill building. If we describe services that are too large, we don’t take advantage of the scalability of the cloud.”

Sopra Steria’s CTO also mentions the cyber dimension. “A company orchestrates services that it doesn’t have 100% control over. The design of this type of architecture must be based on a security-by-design approach.”

Finally, he says, you shouldn’t become “a cloud-native ayatollah” and throw out the entire existing on-premises environment. “In a hybrid approach, you have to ask yourself what you are keeping and what you are rewriting. A neobank can afford to build its information system from scratch in the cloud. A traditional bank, on the other hand, has to deal with the full weight of its legacy.”

Categories
Tech info

DevOps is a corporate culture

DevOps, above all a company culture

DevOps is certainly one of the most famous words in IT. This approach aims to bring “devs” and “ops” closer together by facilitating communication between these two populations of the IT department, which have long ignored each other, or even regarded each other with suspicion.

On paper, developers and operations managers have divergent interests. The former are committed to making the information system evolve as quickly as possible, sometimes to the detriment of quality. The latter, on the other hand, seek to keep the IS stable, thus slowing down deployments.

This lack of communication leads to misunderstandings, bugs and, above all, delays in moving systems into production. The situation has become untenable in an era of digital transformation and time-to-market pressure.

Born around 2007 and named by Belgian engineer Patrick Debois, DevOps has since quickly gained traction. According to the latest predictions from research firm Forrester, half of all IT teams will have moved to consolidated DevOps tool chains by the end of the year.

Meanwhile, a recent survey by Redgate Software revealed that nearly three-quarters of companies had introduced DevOps in at least some projects, up from 47 percent when the study was first published five years ago.

Breaking down silos

As we’ve seen, the DevOps movement is about breaking down silos. Developers and infrastructure managers share a number of best practices and software tools in order to smooth and speed up the release process.

To run the software factory, IT professionals use collaborative tools (Slack, Microsoft Teams, Jira), Git-based source code repositories and continuous integration/continuous delivery (CI/CD) chains. These chains are made up of different open source building blocks (Jenkins, Gradle, CodeShip, Buddy) or come in the form of integrated platforms (GitLab, CloudBees).

And since DevOps is becoming inseparable from the cloud, and especially from hybrid and multicloud environments, there are also Infrastructure-as-Code (IaC) solutions such as Terraform and Ansible and, of course, Kubernetes, the essential orchestrator for containerized environments.
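
As a minimal illustration of the Infrastructure-as-Code idea, the sketch below drives a Terraform working directory from a script. It assumes the terraform CLI is installed and that a hypothetical ./infra directory contains valid configuration; in real pipelines these steps usually live in the CI/CD chain itself.

    # Minimal sketch: driving a Terraform working directory from a script,
    # assuming the terraform CLI is installed and ./infra contains valid configuration.
    import subprocess

    def terraform(*args: str, cwd: str = "infra") -> None:
        """Run a terraform command and fail loudly on a non-zero exit code."""
        subprocess.run(["terraform", *args], cwd=cwd, check=True)

    terraform("init", "-input=false")
    terraform("plan", "-input=false", "-out=tfplan")
    terraform("apply", "-input=false", "tfplan")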

“DevOps is first and foremost a culture”

“But, more than tools or a method, DevOps is above all a culture that must allow teams to interact with each other and to take the other side’s daily activity into account,” says Olivier Félis, pre-sales engineer for France and BeLux at Micro Focus. “Common sense and pragmatism make it possible to harmonize processes and work together smoothly.”

Since the approach involves many internal changes, he advises setting up change management support and making each employee aware of DevOps concepts before it is implemented. The next step is to ensure that all employees can work within this common dynamic, from developers to supervisors.

The gains are there. In the 2021 edition of its Accelerate State of DevOps Report, Google Cloud shows that the most mature teams, dubbed “elites,” continue to accelerate the pace of their software production. They also appear to be less stressed, since all the steps before deployment have been validated and tests are generated automatically. Teams with a true DevOps culture reportedly suffered half as many cases of burnout during the Covid-19 pandemic.

Scaling up and skills shortages

“While large accounts have a good level of maturity, smaller companies are still in the start-up phase,” observes Olivier Félis. “They are sometimes lost, confusing agility with DevOps.”

While the two approaches are legitimately associated, he reminds us that “you can’t be agile without being DevOps, as DevOps contributes to the acceleration of iterations. On the other hand, you can be DevOps without being agile and follow, for example, a V-model cycle.”

Organizations are also having difficulty generalizing the approach. For Olivier Félis, while implementing DevOps in a small area is quite accessible, scaling up can quickly become a real challenge.

A growing movement

The skills shortage is also holding back companies’ ambitions. According to a recent study conducted jointly by CodinGame and CoderPad, DevOps is among the three most sought-after skills by recruiters, along with web development and artificial intelligence and machine learning.

However, the DevOps movement is expected to grow in the coming years. The approach has even spawned offshoots, with the emergence of derivative methods such as DevSecOps and DevFinOps, which natively integrate security or financial optimization into software projects.

When will we see DevGreenOps, which would combine digital sobriety with the approach?

Categories
Cloud computing

What is Cloud native solution?

Cloud native: who offers what? In the adoption of so-called cloud-native architectures, market players can be of great help by offering ready-to-use development platforms.

For example, according to IDC, by 2024, half of all enterprises will be using applications based on managed services that include cloud-native technologies.

OpenShift, Red Hat’s hybrid cloud ecosystem

As the leading provider of open source software solutions, Red Hat offers, with OpenShift, an entire development ecosystem for hybrid cloud environments. Powered by containers and the Kubernetes orchestrator, it enables the creation of cloud-native applications and the modernization of existing ones.

Last April, the IBM-owned company expanded its offerings with the launch of Red Hat Application Foundations. This toolkit provides developers with pre-integrated components for data streaming or API management, as well as various frameworks. Optimized for OpenShift, it can also be used by third-party solutions.

The following month, Red Hat announced updates to several solutions in its portfolio. New features in OpenShift Pipelines and OpenShift GitOps allow for greater leverage of Git, an open source version control system that simplifies development and deployment in hybrid and multi-cloud environments. Red Hat has also evolved its local development environments, called OpenShift DevSpaces and OpenShift Local.

Tanzu, VMware’s cloud-native development platform

With Tanzu, VMware distributes a complete stack that covers the entire lifecycle of a cloud-native application, from its design to its operation within an on-premises infrastructure or in various cloud environments, including multicloud and edge.

Comprising four services on the “dev” side and six on the “ops” side, the modular platform aims to provide all the tools for development and deployment (Tanzu Application Platform), but also for managing, monitoring and securing multi-cluster infrastructures, with Kubernetes as the common thread of the offering.

With Tanzu Kubernetes Grid, VMware offers its own Kubernetes runtime environment as a managed service. Administrators can additionally use Tanzu Mission Control for operations monitoring and management and Tanzu Observability to track and manage application performance.

Amazon Elastic Kubernetes Service, leading the way in “Kubernetes-as-a-Service” solutions

Amazon Web Services (AWS), however, remains at the top of “Kubernetes-as-a-Service” solutions.

According to the latest activity report from the Cloud Native Computing Foundation (CNCF), 37 percent of European organizations use Amazon Elastic Kubernetes Service (EKS). AWS also comes out on top when it comes to serverless: Lambda, its serverless event-driven compute service, has been adopted by 66 percent of the European companies surveyed.

With more than 200 services in its portfolio, AWS’ native cloud approach is, of course, not just about these two offerings. To showcase its various solutions, the U.S. public cloud giant has imagined, in a series of five blog posts, the journey of an e-commerce company in its quest for the Holy Grail of “hypergrowth.”

Azure Kubernetes Service and Azure Functions, the services of Microsoft Azure

In the same CNCF report, Microsoft Azure comes in a solid second, behind AWS, both for managed Kubernetes hosting, with Azure Kubernetes Service (AKS), and for the serverless approach, with Azure Functions. As with AWS, it is difficult to cover all the services that contribute to a cloud-native architecture.

However, we note the recent general availability of Azure Container Apps. This service runs application code packaged in any container, without worrying about the runtime or the programming model. According to Microsoft, it should allow developers to focus on the business logic that adds value to their applications: they no longer have to manage virtual machines, the orchestrator or the underlying cloud infrastructure.

Like AWS, Microsoft Azure also takes an educational approach. In its documentation, the provider explains what a cloud-native application is and what its characteristics are. Of course, it also highlights its own .NET development environment.

Google Cloud Deployment Manager, Google Cloud's Infrastructure-as-Code

The last of the hyperscalers, Google Cloud, is not to be outdone. As the originator of Kubernetes, the Mountain View firm has particular legitimacy to speak out on the cloud-native approach. In a blog post, one of its expert architects details the five principles that govern this type of architecture, one of which is to favor… managed services.

On this front, Google Cloud’s offering is well stocked. Google Cloud Deployment Manager is an infrastructure deployment service based on an Infrastructure-as-Code approach similar to Terraform. There is also Google Cloud Build, a serverless continuous integration and continuous delivery (CI/CD) platform.

To encourage the development of native cloud applications, Google Cloud plays on gamification. Developers who build serverless web applications on Google Cloud using Cloud Run (containerized applications) and Firebase (mobile applications) earn skill badges.

Categories
Cloud computing

4 major challenges of cloud native

Before an enterprise can reap the full benefits of a cloud-native approach in terms of scalability, availability or innovation, it must overcome a number of challenges. Here are the four main ones:

1. Addressing the cloud skills shortage

A recent study by OutSystems points to the lack of in-house skills and expertise as one of the main obstacles to the widespread adoption of the cloud-native approach.

Cloud architect, tech lead, full-stack developer… all of these specialized cloud profiles are currently in high demand in the job market. At the same time, 44% of the decision-makers interviewed in the study believe that the use of cloud-native technologies is a lever for attracting and retaining talent.

The shortage is expected to ease in the coming years, however. Pablo Lopez, chief technology officer at WeScale, notes a shift in the content of initial training courses. “Engineering schools are starting to take cloud-native technologies such as containerization and Kubernetes into account. Future graduating classes will be trained natively on cloud platforms.”

2. Overcoming cloud-native complexity

Moving from monolithic applications to the granular approach of cloud native through a microservices architecture requires a real step up in software skills. “Companies are moving into the cloud by trial and error,” observes Pablo Lopez. “They have to maintain what they have while learning about disruptive technologies.”

While cloud providers offer a very rich portfolio with a dozen new cloud services every month, the IT department must constantly monitor and evaluate the real contribution of these technologies in order to retain the best in terms of performance, security and cost.

According to the expert, two approaches are possible. “A company can set up new operational teams dedicated to cloud-native technologies that bring the rest of the IT department on board, at the risk that these teams will be viewed with envy.” Another possibility is to opt for a global transformation. “This implies significant training efforts, but also greater resistance to change.”

To get around this difficulty, large accounts have set up cloud centers of excellence (CCoE). This core of expertise aims to hide the complexity of cloud projects by popularizing and selecting a limited number of cloud services and by defining trajectories toward the Grail of cloud native. “Digital native” companies, for their part, have set up SRE (Site Reliability Engineering) teams to spread best practices.

3. Controlling operational costs

There is no longer any need to configure a physical server: with the cloud, IT teams can create a development and test environment with a few clicks. This can lead to overconsumption. “While there are threshold mechanisms, the ability to predict cloud costs remains complex,” says Seven Le Mesle, co-founder and president of WeScale. “They depend, among other things, on the traffic load and the auction systems set up by the providers.”
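
A very simple FinOps-style guardrail illustrates the point: compare month-to-date spend against each account's envelope and raise an alert when a threshold is crossed. The accounts, budgets and figures below are invented for the example and do not come from any provider API.

    # Illustrative sketch: flagging cloud accounts whose month-to-date spend
    # exceeds a share of their budget envelope. Figures are made up for the example.
    monthly_budgets = {"sandbox": 500.0, "staging": 2000.0, "production": 10000.0}
    month_to_date_spend = {"sandbox": 480.0, "staging": 900.0, "production": 7200.0}

    ALERT_THRESHOLD = 0.8  # warn once 80% of the envelope is consumed

    for account, budget in monthly_budgets.items():
        spend = month_to_date_spend.get(account, 0.0)
        ratio = spend / budget
        if ratio >= ALERT_THRESHOLD:
            print(f"WARNING {account}: {spend:.0f} of {budget:.0f} spent ({ratio:.0%})")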

“CIOs are thinking more in terms of envelopes than fixed amounts. This disturbs financial teams used to managing a budget to the nearest dollar,” he continues. “As for the FinOps approach, there’s a lot of talk about it, but it’s still not widely implemented.” In the meantime, there are solutions dedicated to cloud cost optimization, such as CloudCheckr from NetApp, Beam from Nutanix and CloudHealth from VMware.

4. Understanding new cybersecurity risks

Finally, cloud native is leading to a paradigm shift in cybersecurity. In an on-premises environment, you only need to build a strong enough wall around the applications to counter attacks, Pablo Lopez reminds us. “Cloud native breaks down this pattern. By multiplying the points of entry, we increase the attack surface. Cloud native therefore means rethinking security by reducing privileged accounts and limiting access to services and data to the strict minimum.”
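
One way to picture this least-privilege principle is a small check that flags over-broad permissions. The policy structure below is deliberately simplified and hypothetical, rather than the exact schema of any particular cloud provider.

    # Illustrative sketch: flagging over-broad permissions in access policies.
    # The policy structure is deliberately simplified and hypothetical,
    # not the exact schema of any particular cloud provider.
    policies = {
        "ci-pipeline": {"actions": ["storage:read", "storage:write"], "resources": ["artifacts-bucket"]},
        "legacy-admin": {"actions": ["*"], "resources": ["*"]},
    }

    def is_over_privileged(policy: dict) -> bool:
        """Least-privilege heuristic: no wildcard actions or resources."""
        return "*" in policy["actions"] or "*" in policy["resources"]

    for name, policy in policies.items():
        if is_over_privileged(policy):
            print(f"{name}: review required (wildcard permissions)")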

“By design” security is not yet second nature, however, notes Seven Le Mesle. IT departments adopt cloud technologies to give their developers more autonomy, but developers are not always aware of the cybersecurity issues, and security teams are overwhelmed. In particular, hackers use exploits published on code repository platforms such as GitHub.

The DevSecOps approach should make everyone responsible for security, bearing in mind that it took five years for DevOps to become mainstream. In the meantime, Seven Le Mesle notes the emergence of a new role: that of the cyber champion, who evangelizes and disseminates best practices within IT teams.

Categories
Cloud computing

Multi-cloud world

Born in the cloud, living in a multi-cloud world. You shouldn’t put all your eggs in one basket. This everyday saying also applies to the world of the cloud. Companies don’t want to relive in the cloud the proprietary lock-in they experienced in the on-premises software world, especially with large ERP systems.

Multicloud, which, as the name implies, consists of using multiple cloud solutions, public or private, reduces this vendor dependency. The approach has other advantages. By multiplying accounts, a company can choose, on a case-by-case basis, the provider that offers the best value for a given service (storage, development environment, computing power).

In the field of innovation, an organization will go to the most mature provider in the “as-a-service” field of machine learning, IoT (Internet of Things) or blockchain. By using several cloud service portfolios, it will be able to respond more precisely to the expectations of business managers and thus reduce the shadow IT phenomenon.

Business continuity and sovereignty

By having the ability to switch from one cloud to another in the event of an outage or performance drop, an organization also gains resiliency. Multicloud can be used as part of a business continuity or disaster recovery plan (BCP, DRP) by using a public cloud as a backup infrastructure. For multinationals, multicloud can also complement the geographic coverage of their traditional provider.

It also addresses issues of sovereignty. While American hyperscalers – Amazon Web Services, Microsoft Azure, Google Cloud – dominate the public cloud market, they are subject to the extraterritorial reach of US law, notably the Patriot Act and the Cloud Act.

This legal risk can lead an organization to select a national flagship – OVHcloud, 3DS Outscale, Scaleway – to host sensitive data or applications.

89% of companies worldwide are pursuing a multi-cloud strategy

For all these reasons, a growing number of companies have adopted this approach. A recent Flexera report reveals that 89% of companies worldwide are pursuing a multi-cloud strategy. In France, nearly 66% of decision makers surveyed in an IBM study say that dependence on cloud providers is a significant barrier to improving business performance.

For all that, the move to multicloud is not a smooth one. Despite the widespread use of open source technologies, the portability of a service from one cloud to another is hampered by proprietary dependencies unique to each ecosystem.

Through various initiatives – Anthos and BigQuery Omni from Google Cloud, Azure Arc from Microsoft Azure, VMware Cloud on AWS – the public cloud market is trying to ensure the reversibility of its solutions, even if the movement remains in its infancy. At the European level, the Gaia-X community project aims to guarantee the interoperability of existing cloud services on the basis of common standards.

Cost explosion and cyber risks

Another complaint is that multicloud encourages cloud costs to explode. By multiplying provider accounts and administration consoles, a company finds it more difficult to control its budget. Providers make things even more complicated by offering particularly complex price lists that are difficult to compare, since each is based on its own units of measurement.

Multi-cloud makes it even more important to use the FinOps approach, which aims to monitor and optimize cloud costs without cutting back on performance. An enterprise can also use a cloud broker or a Cloud Management Platform (CMP) solution from VMware, NetApp, Red Hat or Cisco. In addition to resource provisioning and orchestration functions, this type of platform allows you to control usage and reduce costs.

Finally, there is the cyber aspect. By multiplying the number of clouds, a company mechanically increases its risk exposure surface. It has to juggle several administration consoles, grant access rights for the different accounts, and become familiar with the configuration and patching policy of each platform. This multiplication of entry points requires constant monitoring and vigilance.