Outsourced backup: method and advantages

Outsourced backup consists, for a company, in entrusting the management of the backup and security of its data to an external provider. More and more companies are turning to this solution, both because of the simple, fast method on which outsourced backup is based and because it brings many advantages.

Outsourced backup: a simple method

Setting up an outsourced backup procedure for a company’s strategic data amounts to keeping a copy of this information on a remote server. The data travels over the Internet to be stored on the provider’s servers.

In this respect, companies specializing in this niche offer their customers servers entirely and exclusively dedicated to copying this information.

All the outsourced backups are scheduled to run every day, for example as soon as the employees of the client company finish their workday. An automatic connection is established between the employees’ workstations and the provider’s web server, and the information to be saved is synchronized so that no data slips through and no information is left out.

The company using the outsourced backup service therefore does not have to worry about any complex technical operations. A single piece of software is installed on the employees’ workstations to define the data to be transmitted, so that it can be copied automatically at the end of each day to fully secured servers.
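To make the mechanism concrete, here is a minimal sketch of such an end-of-day job, assuming a hypothetical provider endpoint, API token and file selection (none of them tied to a real vendor’s API). It encrypts the selected files locally before sending them over HTTPS and would typically be triggered by a scheduler such as cron at the end of the workday.

```python
# Minimal sketch of a nightly outsourced-backup job (hypothetical endpoint and token).
# A scheduler (cron, Windows Task Scheduler…) would launch this script every evening.
from pathlib import Path

import requests
from cryptography.fernet import Fernet

PROVIDER_URL = "https://backup.example-provider.com/api/v1/files"  # placeholder URL
API_TOKEN = "REPLACE_ME"                                           # placeholder credential
ENCRYPTION_KEY = Fernet.generate_key()  # in practice, a key kept and reused by the company


def backup_directory(directory: str) -> None:
    """Encrypt every file under `directory` and upload it to the provider."""
    fernet = Fernet(ENCRYPTION_KEY)
    for path in Path(directory).rglob("*"):
        if not path.is_file():
            continue
        ciphertext = fernet.encrypt(path.read_bytes())  # data protected before transfer
        response = requests.put(
            f"{PROVIDER_URL}/{path.name}",
            data=ciphertext,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            timeout=30,
        )
        response.raise_for_status()  # fail loudly if the copy did not succeed


if __name__ == "__main__":
    backup_directory("/home/shared/strategic-data")  # example path to back up
```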

Benefits of using an outsourced backup service

By opting for an outsourced backup service, companies benefit from significant advantages.

  • The backup is fully automated: there is no technical or IT manipulation to perform, and the data is updated regularly
  • The storage space on the provider’s servers is particularly large: this backup system can copy files and data of significant size
  • The service is scalable: depending on the company’s needs, the provider can offer more powerful storage spaces
  • Data can be consulted from anywhere, at any time, for effective remote collaboration on PC, tablet, cell phone…
  • The provider takes care of all the operations, so the company’s employees free up valuable time to stay focused on their core business and daily tasks
  • Data is protected from incidents such as fire, theft, flooding or damaged hard drives
  • This is a permanent solution
  • The overall cost of the operation is often lower than that of all the storage media traditionally put in place
  • Data is protected during transfer to the servers: the provider uses encryption to prevent the information from being hacked

Sovereign cloud comparison: OVHCloud ticks the most boxes

What are the criteria for a sovereign cloud? How do the major providers position themselves with respect to each? Here is an overview.

How should we define a sovereign cloud today? JDN asked Philippe Latombe, a MoDem deputy, member of the National Assembly’s Law Committee and an expert on the cloud. Here is his answer: “It is a cloud located and operated by a French company. A company that has no connection with a foreign parent company, and which is therefore protected against extraterritorial legislation such as the American Cloud Act.” The Cloud Act allows the US federal government to access data hosted by an American company, regardless of where that data is located in the world (see the study by the American law firm Greenberg Traurig LLP).

“A sovereign cloud must also be backed by server and network equipment designed and assembled in France, with the main components, such as processors or memory, also made in France,” adds Philippe Latombe. This precaution limits the risk of backdoors that could be exploited by the CIA under FISA (the Foreign Intelligence Surveillance Act). “To avoid any external interference, the provider will finally offer a system to encrypt the customer’s data, giving the customer the option of using his own encryption keys,” adds the deputy.

Based on this definition, JDN draws up below a comparison of the main cloud providers, French or not, operating in France, assessing each of them against all the sovereignty criteria mentioned.

 
Criteria assessed (7): granular encryption service*; offering isolated from extraterritorial legislation; proprietary software platform made in France; servers and network equipment designed in France; servers assembled in France; processor made in France; SecnumCloud certification.

Number of criteria met per provider:
  • AWS: 1
  • Google Cloud: 1, plus 2 in project
  • Microsoft Azure: 0, plus 2 in project
  • Oracle: 0
  • Orange Flexible Cloud: 1
  • OVHCloud: 4
  • Scaleway: 3
  • 3DS Outscale: 3

* Encryption offering covering the main cloud services offered (virtual machines, storage, database services, Container as a Service, Kubernetes as a Service, Functions as a Service…)

Of the 7 criteria analyzed, OVHCloud is the one that meets the most, i.e. 4. In France, Octave Klaba’s group naturally offers a legal structure that isolates its offering from offshore regulations. It designs its own servers and assembles them in its factory in Croix, in the north of France. This industrial infrastructure produces more than 80,000 servers every year. This policy of internalization allows OVH to optimize and, above all, largely secure its supply chain. On the other hand, the Roubaix-based group does not build the electronic components of its machines. As a result, it remains dependent on the vagaries of this market, particularly in the critical microprocessor segment, not to mention the backdoors that can creep in.

OVHCloud has also obtained the very select SecnumCloud certification awarded by the French National Agency for Information Systems Security (Anssi), a certification deliberately included among the sovereignty criteria analyzed. Why? Because it carries the recognition of the French State as to “the quality and robustness of the service, the competence of the provider, and the trust that can be placed in it” (says Anssi). The fact remains that the certified service is OVH’s private cloud, which, unlike its public cloud offering (built on an open source foundation), is based on the proprietary American platform VMware. 3DS Outscale, for its part, has obtained the coveted certification for its public cloud infrastructure. However, the cloud subsidiary of Dassault Systèmes has chosen NetApp storage systems and Cisco network equipment, which are also American technologies. “SecnumCloud requires us to use devices to detect third-party network traffic (from, for example, spy-oriented sniffers embedded in US technologies under FISA, editor’s note),” says David Chassan, Director of Strategy at 3DS Outscale.

Towards sovereign processors?

In terms of processors, the French sovereign cloud sector could gain momentum in the wake of the Électronique France 2030 plan. Unveiled by the government in July, it plans to inject $5 billion into semiconductors, including $800 million into the next generation of 10-nanometer processors. Targeting the IoT but also the cloud, it is part of the second Important Project of Common European Interest (IPCEI) on electronics. For France, this program adds 10 billion dollars of spending targeting about fifteen R&D projects in electronics and telecoms, as well as the construction of a dozen new factories or manufacturing lines for components. The combined ambition of the IPCEI and the Électronique France 2030 plan? To increase semiconductor production capacity in France by around 90% by 2027.

Among semiconductor champions, there is the unavoidable STMicroelectronics, but above all Soitec, which targets the edge computing segment in particular, a positioning that will become increasingly important with the growing trend towards decentralized cloud computing. Among server manufacturers, 2CRSI is a key player, whose technology has been chosen by OVHCloud to equip its Asian datacenters.

Sovereign offers deemed “illusory”

“The issue of the sovereign cloud, which raises the question of the integrity of the security of the data entrusted to providers, is an essential issue that is recognized by all the players in the market, whether American, European or French,” explains Olivier Iteanu, a lawyer at the Paris bar and an expert on digital legislation. Some American cloud providers have gone so far as to appropriate the term “sovereign cloud” and integrate it into their marketing policy. This is notably the case for Microsoft and Oracle, which have both launched so-called “sovereign” European offerings. These solutions guarantee the localization of data in the customer’s country, the attachment of support to local teams, and even isolation from the supplier’s other cloud regions (“non-sovereign”).

“Here, the promise is illusory. It goes without saying that these services are not impervious to the Cloud Act, which takes precedence over any contract. With this legislation, the US is offering a legal tool that legalizes industrial espionage and data capture,” insists Olivier Iteanu. “If a French aircraft manufacturer had the plans for one of its future models stolen from an American cloud, it could turn against the provider, but the provider would then be able to invoke the protection of the Cloud Act.”

Trusted rather than sovereign clouds

For the attorney, SecnumCloud certification may be the solution that puts everyone on the same page. In its version 3.2 released in October 2021, SecnumCloud incorporates new requirements to ensure that the provider and the data it processes cannot be subject to non-European laws. Data localization, human resources, access control, information encryption, risk management, real-time incident detection… The Anssi reference framework is very detailed, even specifying requirements for the physical security of data centers.

By seeking to distribute their clouds via French third parties, Microsoft and Google aim to obtain the famous certification. Microsoft will use Bleu, a joint venture created by Orange and Capgemini, to market its Azure cloud in France. Google, for its part, has joined forces with Thales to create a joint venture (called S3NS) under French jurisdiction. “The success of the Bleu and S3NS projects will depend on how their services are organized and framed. In both cases, the teams and the cloud infrastructures will have to be entirely isolated from those of the publisher, in addition to being attached to very distinct legal structures designed to guarantee complete insulation from the Cloud Act,” warns Olivier Iteanu. The Azure offering marketed by Bleu should be launched by the end of September. As for S3NS, it is already being tested by a few companies in beta. Both companies describe their future offerings as trusted clouds, not sovereign clouds, a model for which they are far from ticking all the boxes.

Cloud Computing market

According to the December 2014 edition of PAC’s CloudIndex, companies’ maturity with regard to the Cloud continues to grow, and adoption of Cloud solutions has even jumped, due in particular to the prior underestimation of actual usage. As a result, 55% of companies now say they use Cloud solutions, compared to 29% last June.

Companies primarily use SaaS applications (54%). IaaS offerings, less widespread until now, are used by 46% of respondents. According to the firm, they are of particular interest to companies with fewer than 500 employees, which use these solutions for application hosting (54%), testing (49%) and website hosting (46%). PaaS remains in the background (+6 points), mainly because it is “mainly confined to developers”.

But the Cloud is not just about solutions. Services are developing in parallel. The French firm estimates the value of services (consulting, integration …) marketed in 2013 at 1.2 billion dollars. And these expenses should grow by an average of 39% per year by 2018. In total, the French Cloud market should reach 5 billion dollars in late 2014 and exceed 7 billion in 2018.

The quest for agility – According to CloudIndex, “the need for flexibility and the desire to reduce costs are the main reasons for moving to the Cloud (66%), ahead of improving time to market (60%) and developing innovative products, solutions or approaches (59%)”.

30% of companies are now formalizing real Cloud strategies: a result that tends to show that “organizations are increasingly using the Cloud in an organized and strategic way rather than opportunistically.” This is also true for SaaS, which is no longer confined to less strategic areas. Nearly “eight out of ten organizations that use SaaS consider at least one of their SaaS applications to be strategic to their business,” according to the barometer, which “confirms that SaaS is not just a stopgap measure or an unimportant add-on.”

Security – There are many reasons not to use the cloud, but the main one remains the same and far ahead of the others: security. These fears have become even more pronounced and are considered important by nearly two-thirds of respondents, compared to less than 50% six months earlier. However, these fears are often unfounded, according to PAC.

For the firm, this feeling of insecurity is “regularly fueled by high-profile operational incidents, hacker attacks, or even international espionage cases, such as that of the NSA.”

Public cloud – For PAC, the need for proximity expressed by companies is reflected in the search for local service providers. “This desire to deal with local suppliers is clearly illustrated by our surveys. The criterion of datacenter location is on the rise.”

The importance of proximity for users is more than tangible. The firm assures us that this expectation applies both to Cloud providers and to service providers: local roots matter, in the public cloud as well as in the private cloud.

Cloud business model set for big changes, says VMware

The cloud business model will see big changes, according to VMware. Enterprise blockchain may still be in experimental mode, but it could soon change the way applications and systems are designed, moving from an architecture managed by individual organizations to architectures in which applications and data are shared and secured across multiple entities – in essence, a truly decentralized form of computing.

There are many cloud service providers, but even more data centers. Do all these data centers, with countless amounts of underutilized computing power, represent an untapped pool of cloud computing power that could flatten the cloud ecosystem?

That’s according to Kit Colbert, CTO at VMware, who sees a much more decentralized future than currently exists. I recently had the opportunity to speak with him at VMware’s Explore conference in San Francisco last week, where he described the factors that are opening up enterprise computing.

Increasingly decentralized environments

One emerging scenario is applications built around blockchain or distributed ledger technologies, with their ability to enable trust among multiple participants, Kit Colbert relates. “Enterprise blockchain is very well aligned with our focus.”

Today, the focus is on distributed applications that are built and run with cloud-native or Kubernetes-based building blocks. But attention is shifting toward decentralized environments, he noted. Distributed architectures are operated by a single entity, whereas decentralized architectures are operated by multiple organizations.

Although both architectures support multiple application instances and a shared database, “the big difference is that in a decentralized architecture, different companies will be running some of those instances, instead of being run by a single organization,” he explained.

That means those organizations “probably won’t trust each other completely,” Kit Colbert continued. “That’s where blockchain comes in, to support those kinds of use cases.”

“The Airbnb of computing capacity,” according to Kit Colbert

While decentralized blockchain-based systems still represent a small fraction of VMware’s offerings, Kit Colbert expects that to grow as the technology develops.

Cloud computing itself is a heterogeneous mix, and will remain so. While public cloud computing is a big part of the future for many IT plans, on-premises environments still have their place, Kit Colbert believes.

“Even if a company was born in the cloud or moves to the cloud, we often find that it brings things back on premises. Often, for reasons of cost, compliance, security, locality, or sovereignty, it’s better to keep things in-house. Putting everything in the public cloud is not the right solution, and keeping everything on premises is not the right solution either. Instead, to be smart, you have to say: OK, what are the requirements of the application, and where is the best place to meet all those requirements?”

From a data center perspective, the technologies are now in place to support grid-like cloud resources, using not only cloud provider resources but also the capacity of shared private data centers offered on an open spot market – a sort of Airbnb of computing capacity. This includes the ability “to run a virtual machine that can be protected even from an administrator,” says Kit Colbert. “We can apply that cryptographically, which we couldn’t do a few years ago, thanks to processor core changes.”

VMware once piloted a “cloud exchange” in which unused capacity in corporate data centers could be sold on an open market. “The project was a learning experience for the company and helped identify potential problems,” says Kit Colbert.

Conducted among VMware’s cloud providers and platform partners, the main issue encountered during the pilot was security – moving data to unknown locations. “We can’t write unencrypted data to a hard drive that belongs to another customer,” says Kit Colbert. “That’s a red line – we have to have encryption. We also have to have a way to prevent the operator from accessing the virtual machine or its data, either at runtime or at rest.”

The CTO role is evolving

Providing security also introduces “liability issues for customer operators,” he continues. “They’re not going to want to sign indemnification clauses, and a whole bunch of legal and other things that we might get caught up on as well.”

Kit Colbert also discussed the evolving role of his profession, the chief technology officer, which often overlaps with chief information officers and chief digital officers. “The CTO is one of the least defined roles in the industry,” he believes. “It can be a vice president of engineering, a super sales engineer, an evangelist or a product manager… or it can be more of an individual contributor, more of an influencer, an architect type.”

Kit Colbert oversees innovation, ESG, and the core platforms and services that support the vendor’s business units. “In addition, I provide the overall technical strategy for the company: this is where we should be going as a company, and this is the outline of what we should be doing as a company.”

Multi-cloud architectures: a new deal in cybersecurity

Over the past few years, the cloud revolution has profoundly transformed the IT business models of organizations across all industries. A majority of organizations now use multiple applications and cloud hosting services, integrated within a single information architecture.

This “multi-cloud” model has become popular because of its many operational advantages, but it also raises many questions that need to be anticipated in order to reap its full benefits in complete security.

Common problems

The use of multi-cloud takes different forms depending on each organization’s data management policies. While it is particularly common to use separate vendors for infrastructure, platform and application needs, many organizations now use multiple IaaS, PaaS and SaaS services simultaneously.

This choice reflects the desire to avoid becoming too dependent on a single supplier, but it is explained above all by the technical adaptability it allows. By opting for dedicated services for each need and suppliers specialized in each task, network administrators can design IT architectures that are perfectly tuned to business needs and always optimally sized. Financially, it is also a way to take advantage of the fierce competition among cloud providers to get the best available price for each service.

These undoubted operational advantages, however, pose many challenges, not the least of which is the considerable complexity of cybersecurity efforts. In a context of a significant increase in cyber attacks, linked in particular to geopolitical tensions and the new opportunities provided by the digitalization of companies, the accumulation of cloud services is also synonymous with an increase in potential security breaches. The interconnection between cloud services in multicloud architectures can lead to the uncontrolled circulation of sensitive data and personal data, the processing of which is now strictly regulated.

A comprehensive review of security policies

For organizations that are aware of these issues, the gradual adoption of the multi-cloud model must be accompanied by a regular review of security and data handling policies. As cloud solutions continue to evolve, the technical, legal and regulatory compliance of all services must be re-evaluated at regular intervals, taking into account the uses and criticality of the data exchanged.

Particular attention must be paid to the security of APIs due to the significant differences in maturity between providers in this area. Despite the high complexity of the data circulation pattern in multi-cloud environments, the objective must be to achieve a unified vision of the application ecosystem in order to deduce an appropriate security plan. A task in which the close cooperation of the company’s partners and its cloud providers is essential.

A lever for optimization

This process of constant reassessment of the infrastructure and its security should not be seen as a simple precautionary measure. Beyond cybersecurity, the superposition of sometimes redundant tools and the increasing complexity of IT architectures can sometimes put a strain on the productivity of both IT departments and business teams.

Because it provides a better understanding of the company’s application environment, rigorous management of multi-cloud issues can also be the starting point for optimizing security processes and business processes.

Cloud Migration and Enterprise Architecture go hand in hand

The “cloud only” philosophy seems to be gaining support in a growing number of organizations. But beware: migrating the entire information system (IS) to the cloud indiscriminately is rarely a good choice. It is a decision that would make any enterprise architect wince, because such an option amounts to trivializing all IS applications and erasing their intrinsic differences: business value, lifecycle, complexity and, of course, the sensitivity of the data processed.

And this is all the more true given that cloud projects are far from neutral in budgetary terms. Choosing the cloud is not something to be taken lightly: it can make sense for some applications, and be totally useless, or even counter-productive, for others. The study must be carried out application by application: a role for the enterprise architect.

1. Manage cloud migration priorities

The usefulness to business teams – in other words, business criticality – is naturally the first thing to measure for a cloud migration: why build a project for an application that is rarely used or brings little value to the business? On the contrary, a core business application gains in durability, efficiency, flexibility and scalability by migrating to the cloud. This is the promise of this option: spending less time on infrastructure issues and more time on functional changes.

This value is also measured over time: a core business application that is at the end of its life cycle, despite its value, will not be migrated, but replaced by a “cloud native” application. Conversely, an application that is still immature or lacking in stability (functional or technical) should not be a priority for a cloud migration. In this context, the enterprise architect will be able to finely evaluate the life cycle of each application to decide whether it is appropriate to launch a migration project.

2. Application complexity determines the project workload

A migration project is far from neutral for the organization, and all the more so the more complex the application concerned. This complexity is measured first and foremost in terms of the components of the application itself: the technology used for its development, the level of specific developments or customized features, etc.

Measuring these elements makes it possible to evaluate the workload for both the IT department and the business teams, who will be required to contribute to the migration, according to the possible types of strategy: “rehosting” (moving the application as is), “replatforming” (moving to a cloud-native platform), “repurchasing” (switching to a new product) and “refactoring” (redesigning the architecture). Alternatively, it can be decided to decommission the application (“retire”) or to keep it as is (“retain”). Together, these six strategies (the “6Rs”) make up the possible options for cloud migration.
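To make the application-by-application study concrete, here is a minimal sketch, with hypothetical criteria and arbitrary thresholds, of how one of the 6R strategies could be assigned to each application in a portfolio. It is an illustration of the reasoning, not a prescribed methodology.

```python
# Toy decision helper: assign one of the 6R strategies to each application
# based on simplified, hypothetical criteria and arbitrary thresholds.
from dataclasses import dataclass


@dataclass
class Application:
    name: str
    business_value: int   # 1 (low) .. 5 (high)
    end_of_life: bool     # application at the end of its lifecycle
    stable: bool          # functionally and technically mature
    cloud_allowed: bool   # GRC check: data may legally live in a (public) cloud
    complexity: int       # 1 (simple) .. 5 (CRM/ERP-like, many interconnections)


def six_r_strategy(app: Application) -> str:
    """Return a 6R strategy for one application (illustrative rules only)."""
    if not app.cloud_allowed:
        return "retain"        # e.g. OIV/OSE constraints: keep on premises
    if app.business_value <= 1:
        return "retire"        # little value: decommission rather than migrate
    if app.end_of_life:
        return "repurchase"    # replace with a cloud-native product
    if not app.stable:
        return "retain"        # not a migration priority while still maturing
    if app.complexity >= 4:
        return "refactor"      # heavy interconnections justify a redesign
    if app.complexity >= 2:
        return "replatform"
    return "rehost"            # simple, valuable, stable: lift and shift


portfolio = [
    Application("intranet-wiki", 1, False, True, True, 1),
    Application("core-erp", 5, False, True, True, 5),
    Application("legacy-crm", 4, True, True, True, 4),
]
for app in portfolio:
    print(f"{app.name}: {six_r_strategy(app)}")
```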

But the complexity of an application can also be assessed by its position in the information system and its interconnections with other elements of the IS. For example, a CRM or an ERP is inherently complex, because it interfaces with many other applications, and even with the company’s ecosystem. This also makes them more difficult to migrate. Here again, the work of the enterprise architect in terms of IS mapping is a valuable tool for evaluating migration possibilities.

3. Data sensitivity: managing risk and compliance

Finally, it is also the data that will have to guide the organization’s choices when it comes to cloud migration. Because the process is far from trivial in terms of governance, risk management and compliance (GRC). In some cases, this is simply not possible, at least not for public clouds: sensitive business sectors, operators of vital importance (OIV) or operators of essential services (OSE), public administrations, etc.

When it is possible to move data to the cloud, whatever the sector concerned, the enterprise architect must ensure that security aspects are managed effectively: migrating applications, and therefore data, does not mean delegating the entire security of the IS to a third party. On the contrary, it is even a matter of redoubling vigilance.

How to optimize the cloud?

When we think of climate change, we tend to imagine beaches covered in garbage, black smoke from factories, or the massive deforestation of the Amazon. But some of the damage to the planet is much less visible, like the damage done by the digital sector. The digital sector alone accounts for nearly 4% of global carbon emissions, twice as much as global civil aviation. And given our growing and increasingly energy-intensive digital uses, this figure is expected to jump by 60% by 2040.

If, until now, we have chosen to ignore the emissions generated by IT by focusing on its benefits, such as the reduction in the amount of paper used within companies or the ability to work and meet remotely, we must react.

So any company that wants to be environmentally conscious today needs to make sure that its digital uses are more sustainable. And one of the answers is the cloud.

Cloud players already on the move toward responsible digital

While the use of the cloud does rest on physical infrastructure such as datacenters, these are becoming increasingly energy efficient: servers are becoming more efficient and consuming less energy. There are many reasons for this, but one in particular stands out: the location of data centers. Countries like Sweden, with hydroelectricity, or France, with nuclear power, produce much lower-carbon energy than most of their European neighbors.

Major providers such as Google Cloud Platform, Amazon Web Services and Microsoft Azure have also all set clear renewable energy targets, mainly through so-called Power Purchase Agreements (PPAs). These power purchase agreements dedicated to companies, SMEs and local authorities provide access to green and reliable energy for many years while promoting the energy transition.

However, even if we are indeed moving in the right direction, it is often difficult to weigh these promises against rigorous GHG Protocol audits and the various reports that appear on carbon emissions. How can we make sense of this?

How can we optimize the cloud?

There are several ways that companies and cloud providers can work together to reduce their emissions and get the most out of the technology. First, companies can work directly with their providers, or even specialized agencies, to conduct an audit and estimate their current emissions. The important thing is to get a complete picture of how much carbon is being generated by the company’s activities.

Second, there are two ways to optimize the use of cloud resources. The first is to go back to basics by changing the code itself. It’s important to keep in mind that while optimizing a few lines of code may seem trivial, the overall effect can be very powerful, especially when we’re talking about hundreds or thousands of websites for some multinationals. The other way is to optimize the way the cloud deployment is done. With certain operational optimizations, such as continuous deployment, it is possible to use fewer resources for the same application.

Finally, the cloud also makes it possible to move data and services anywhere there is a suitable datacenter. So when launching a new project, companies should be able to choose the datacenter connected to the grid with the lowest carbon intensity. For example, a data center in Sweden that uses renewable energy can emit up to 10 times less CO2 per kWh than one powered by coal-fired electricity in Germany.
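As an illustration of that choice, here is a minimal sketch of carbon-aware region selection. The region names and the grams-of-CO2-per-kWh values are hypothetical placeholders; a real deployment would pull live figures from the provider or a grid operator.

```python
# Toy example: pick the deployment region whose electricity grid has the lowest
# carbon intensity. All regions and figures below are illustrative placeholders.
CARBON_INTENSITY_G_PER_KWH = {
    "europe-north-sweden": 30,    # hydro-heavy grid (hypothetical value)
    "europe-west-france": 60,     # nuclear-heavy grid (hypothetical value)
    "europe-west-germany": 350,   # more fossil generation (hypothetical value)
}


def lowest_carbon_region(intensities: dict[str, int]) -> str:
    """Return the region with the lowest grid carbon intensity."""
    return min(intensities, key=intensities.get)


if __name__ == "__main__":
    region = lowest_carbon_region(CARBON_INTENSITY_G_PER_KWH)
    print(f"Deploy the new project to: {region}")
```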

Ultimately, the crux of optimizing a company’s digital impact lies in transparency, and especially access to information. However, while cloud providers are beginning to provide some carbon emissions data, these figures are generally based on the global market and do not include location information. The revolution is underway, but much progress remains to be made.

Cloud-native infrastructure

Cloud-native infrastructure: what requirements, for what benefits? Moving into the cloud is a gradual process. In the long learning curve of cloud technologies, the “lift and shift” principle is only an intermediate phase. It consists of migrating on-premises applications or databases to virtual machines without modifying them, simply by copying and pasting them. By moving IT resources from one environment to another in this way, without really rethinking the technology, a company does not get the full benefit of the cloud.

The next step is cloud native. An application is specifically designed for the cloud to take advantage of all its strengths in terms of scalability, high availability, resiliency or interoperability. “Unlike monolithic applications, the granular approach of the cloud native allows new functionalities to be deployed easily, thus reducing time to market,” says Mohammed Sijelmassi, Chief Technology Officer at Sopra Steria.

To further accelerate development, a company can call on various ready-to-use cloud services offered by providers, such as authentication or monitoring services. “Why rewrite the umpteenth identity management brick when the cloud is full of very good solutions?” asks Mohammed Sijelmassi. “We might as well focus on the elements that differentiate the enterprise.”

With these assets, cloud-native application development has a bright future. By 2025, Gartner estimates that more than 95 percent of new digital workloads will be deployed on cloud-native platforms, up from 30 percent in 2021. IDC, meanwhile, predicts that the share of cloud-native applications will exceed 90 percent by the same time frame.

Microservices, containers, mesh, APIs and DevOps

The environment that hosts this new generation of applications meets a number of prerequisites. The cloud native approach is inseparable from the microservices architecture. An application or service is broken down into small software bricks that can be deployed, upgraded and managed autonomously.

Microservices go hand in hand with containerization and the flexibility it brings. By encapsulating an application so that it can run on any server, software container technology makes it possible to abstract away the underlying infrastructure. Kubernetes, an open source building block that has become essential, manages and orchestrates container clusters.
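As a small illustration of that orchestration layer, the sketch below uses the official Kubernetes Python client to list the deployments in a namespace and their replica counts. It assumes an existing cluster and a local kubeconfig; the namespace is just an example.

```python
# Minimal sketch: query a Kubernetes cluster for its deployments and replica counts.
# Assumes a reachable cluster and a kubeconfig file (e.g. ~/.kube/config).
from kubernetes import client, config


def list_deployments(namespace: str = "default") -> None:
    config.load_kube_config()  # or config.load_incluster_config() when running in a pod
    apps = client.AppsV1Api()
    for deploy in apps.list_namespaced_deployment(namespace).items:
        wanted = deploy.spec.replicas
        ready = deploy.status.ready_replicas or 0
        print(f"{deploy.metadata.name}: {ready}/{wanted} replicas ready")


if __name__ == "__main__":
    list_deployments("default")
```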

Cloud native also relies on the service mesh, an infrastructure layer specifically dedicated to an application. It makes it possible to see how the various components interact with each other and thus to optimize the performance of the application more easily.

Finally, to finish with the buzzwords, cloud native of course uses APIs to call third-party cloud services. Because it makes it possible to quickly deploy updates to any part of the application, it is also part of the DevOps and CI/CD (continuous integration/continuous delivery) movement.

Lack of internal skills

If, on paper, cloud native ticks all the boxes, there are still a few obstacles to overcome before it becomes widespread. An OutSystems study points to a lack of in-house skills and expertise. “Thinking in terms of microservices application architecture requires a certain maturity,” says Alexandre Cozette, OutSystems’ technical director for France. “Cloud developers need to master the principles of containerization and be familiar with the offerings of hyperscalers.”

To address this talent shortage, OutSystems announced Project Neo in November, a next-generation low-code platform that allows developers to build full-stack cloud-native applications while taking care of all the technical “plumbing” behind them.

“Cloud native adds complexity,” adds Mohammed Sijelmassi. “You have to design the architecture to break down an application into microservices that are sufficiently flexible and controllable. Getting to the right level of granularity requires some skill building. If we define services that are too large, we don’t take advantage of the scalability of the cloud.”

Sopra Steria’s CTO also mentions the cyber component. “A company assembles services that it doesn’t have 100% control over. The design of this type of architecture must be based on a ‘security by design’ approach.”

Finally, he says, you shouldn’t become “a cloud-native ayatollah” and throw out the entire existing on-premises environment. “In a hybrid approach, you have to ask yourself what you are keeping and what you are rewriting. A neobank can afford to build its information system from scratch in the cloud. A traditional bank, on the other hand, has to deal with the full weight of its legacy.”

What is a cloud-native solution?

Cloud native: who offers what? In the adoption of so-called cloud-native architectures, market players can be of great help by offering ready-to-use development platforms.

For example, according to IDC, by 2024, half of all enterprises will be using applications based on managed services that include cloud-native technologies.

OpenShift, Red Hat’s hybrid cloud ecosystem

As the leading provider of open source software solutions, Red Hat offers, with OpenShift, an entire development ecosystem for hybrid cloud environments. Powered by containers and the Kubernetes orchestrator, it enables the creation of native cloud applications or the modernization of existing applications.

Last April, the IBM-owned company expanded its offerings with the launch of Red Hat Application Foundations. This toolkit provides developers with pre-integrated components for data streaming or API management, as well as various frameworks. Optimized for OpenShift, it can also be used by third-party solutions.

The following month, Red Hat announced updates to several solutions in its portfolio. New features in OpenShift Pipelines and OpenShift GitOps allow for greater leverage of Git, an open source version control system that simplifies development and deployment in hybrid and multi-cloud environments. Red Hat has also evolved its local development environments, called OpenShift DevSpaces and OpenShift Local.

Tanzu, VMware’s cloud-native development platform

With Tanzu, VMware distributes a complete stack that covers the entire lifecycle of a cloud-native application, from its design to its operation within an on-premises infrastructure or in various cloud environments, including multicloud and edge.

Comprising four services on the “dev” side and six on the “ops” side, the modular platform aims to provide all the tools for development and deployment (Tanzu Application Platform), but also for managing, monitoring and securing multi-cluster infrastructures, with Kubernetes as the common thread of the offering.

With Tanzu Kubernetes Grid, VMware offers its own Kubernetes runtime environment as a managed service. Administrators can additionally use Tanzu Mission Control for operations monitoring and management and Tanzu Observability to track and manage application performance.

Amazon Elastic Kubernetes Service, leading the way in “Kubernetes-as-a-Service” solutions.

Amazon Web Services (AWS), for its part, remains at the top of “Kubernetes-as-a-Service” solutions.

According to the latest activity report from the Cloud Native Computing Foundation (CNCF), 37 percent of European organizations use Amazon Elastic Kubernetes Service (EKS). AWS also comes out on top when it comes to serverless. Lambda, its serverless event-driven compute service, has been adopted by 66 percent of the European companies surveyed.
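For readers who have never seen the serverless model in action, here is a minimal sketch of a Python Lambda handler. The event fields and the business logic are invented for illustration, and the deployment details (packaging, IAM role, trigger) are deliberately left out.

```python
# Minimal sketch of an AWS Lambda handler written in Python.
# The "order_id" field is a made-up example, not a specific AWS trigger's schema.
import json


def lambda_handler(event, context):
    """Entry point invoked by the Lambda runtime for each incoming event."""
    order_id = event.get("order_id", "unknown")

    # ... business logic would go here (validate the order, write to a database, etc.) ...

    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"order {order_id} processed"}),
    }
```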

With more than 200 services in its portfolio, AWS’ native cloud approach is, of course, not just about these two offerings. To showcase its various solutions, the U.S. public cloud giant has imagined, in a series of five blog posts, the journey of an e-commerce company in its quest for the Holy Grail of “hypergrowth.”

Azure Kubernetes Service and Azure Functions, the services of Microsoft Azure

In the same CNCF report, Microsoft Azure comes in a good second, behind AWS, both for Kubernetes managed service hosting, with Azure Kubernetes Service (AKS) and for the serverless approach with Azure Functions. As with AWS, it is difficult to cover all the services that contribute to a native cloud architecture.

However, we note the recent general availability of Azure Container Apps. This service runs application code packaged in any container without worrying about execution or the programming model. According to Microsoft, it should allow developers to focus on the business logic that adds value to their applications. They no longer have to manage virtual machines, the orchestrator or the underlying cloud infrastructure.

Like AWS, Microsoft Azure also plays an educational role. In its documentation, the provider explains what a cloud-native application is and what its characteristics are. Of course, it also highlights its own .NET development environment.

Google Cloud Deployment Manager, Google Cloud’s Infrastructure-as-Code service

The last hyperscaler in this overview, Google Cloud, is not to be outdone. As the originator of Kubernetes, the Mountain View firm has particular legitimacy to speak on the cloud-native approach. In a blog post, one of its expert architects details the five principles that govern this type of architecture, one of which is to favor… managed services.

On this front, Google Cloud’s offering is well stocked. Google Cloud Deployment Manager is an infrastructure deployment service based on an “Infrastructure as Code” approach similar to Terraform. We can also mention Google Cloud Build, a serverless continuous integration and continuous delivery (CI/CD) platform.
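As an example of that Infrastructure as Code approach, here is a minimal sketch of a Deployment Manager template written in Python; the zone, machine type and image are illustrative values only.

```python
# Minimal sketch of a Google Cloud Deployment Manager template written in Python.
# Deployment Manager calls GenerateConfig() and creates the resources it returns.
# Zone, machine type and image are illustrative values only.

def GenerateConfig(context):
    """Declare a single Compute Engine VM, named after the deployment."""
    resources = [{
        "name": context.env["deployment"] + "-vm",
        "type": "compute.v1.instance",
        "properties": {
            "zone": "europe-west1-b",
            "machineType": "zones/europe-west1-b/machineTypes/e2-small",
            "disks": [{
                "boot": True,
                "autoDelete": True,
                "initializeParams": {
                    "sourceImage": "projects/debian-cloud/global/images/family/debian-12",
                },
            }],
            "networkInterfaces": [{"network": "global/networks/default"}],
        },
    }]
    return {"resources": resources}
```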

To encourage the development of native cloud applications, Google Cloud plays on gamification. Developers who build serverless web applications on Google Cloud using Cloud Run (containerized applications) and Firebase (mobile applications) earn skill badges.

4 major challenges of cloud native

Before an enterprise can reap the full benefits of a cloud-native approach in terms of scalability, availability or innovation, it must meet a number of challenges. Here are the four main ones:

1. Addressing the cloud skills shortage

A recent study by OutSystems points to the lack of in-house skills and expertise as one of the main obstacles to the widespread use of the cloud-native approach in desktop and mobile apps.

Cloud architect, tech lead, full-stack developer… all of these profiles specialized in cloud technologies are currently in short supply on the job market. At the same time, 44% of the decision-makers interviewed in the study believe that the use of cloud-native technologies is a lever for attracting and retaining talent.

The shortage is expected to ease in the coming years, however. Pablo Lopez, chief technology officer at WeScale, notes a shift in the content of initial training courses. “Engineering schools are starting to take cloud-native technologies such as containerization and Kubernetes into account. Future graduating classes will be trained natively on cloud platforms.”

2. Overcoming cloud complexity

Moving from monolithic applications to the granular approach of cloud native through a microservices architecture requires a real step up in software skills. “Companies are going into the cloud by trial and error,” observes Pablo Lopez. “They have to maintain what they have while learning about disruptive technologies.”

While cloud providers offer a very rich portfolio with a dozen new cloud services every month, the IT department must constantly monitor and evaluate the real contribution of these technologies in order to retain the best in terms of performance, security and cost.

According to the expert, two approaches are possible. “A company can set up new operational teams dedicated to cloud-native technologies that bring on board the rest of the IT department. At the risk that these will be looked at with envy.” Another possibility: opt for a global transformation. “This implies significant training efforts, but also a higher resistance to change.”

To get around this difficulty, large accounts have set up cloud centers of excellence (CCoE). This core of expertise aims to hide the complexity of cloud projects by popularizing and selecting a limited number of cloud services and by setting out trajectories to reach the grail of cloud native. “Digital native” companies have set up SRE (Site Reliability Engineering) teams to spread best practices.

3. Controlling operational costs

There is no longer any need to configure a physical server: with the cloud, IT teams can create a development and test environment in a few clicks. This can lead to overconsumption. “While there are threshold mechanisms, the ability to predict cloud costs remains complex,” says Seven Le Mesle, co-founder and president of WeScale. “They depend, among other things, on the traffic load and the auction systems set up by the providers.”

“CIOs are thinking more in terms of envelopes than fixed amounts. This disturbs financial teams used to managing a budget to the nearest dollar,” he continues. “As for the FinOps approach, there’s a lot of talk about it, but it’s still not widely implemented.” Failing that, there are solutions dedicated to cloud cost optimization such as CloudCheckr from NetApp, Beam from Nutanix or CloudHealth from VMware.
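As a tiny illustration of this “envelope” logic, here is a minimal sketch of a spending guardrail. The figures are hypothetical, and real FinOps tooling (or the providers’ own budget alerts) would of course be far richer.

```python
# Toy "envelope" check: project the month-end cloud spend from the current run
# rate and flag it when it threatens to exceed the agreed budget. Figures are
# hypothetical.
import datetime

MONTHLY_ENVELOPE_EUR = 20_000  # budget envelope agreed with the finance team


def projected_month_end_spend(spend_so_far: float, today: datetime.date) -> float:
    """Linear projection of end-of-month spend from the spend accrued so far."""
    next_month = (today.replace(day=28) + datetime.timedelta(days=4)).replace(day=1)
    days_in_month = (next_month - datetime.timedelta(days=1)).day
    return spend_so_far / today.day * days_in_month


if __name__ == "__main__":
    today = datetime.date(2022, 9, 12)  # example date
    spend_so_far = 9_400.0              # example figure from the billing export
    projection = projected_month_end_spend(spend_so_far, today)
    if projection > MONTHLY_ENVELOPE_EUR:
        print(f"Warning: projected spend {projection:.0f} EUR exceeds the {MONTHLY_ENVELOPE_EUR} EUR envelope")
    else:
        print(f"Projected spend {projection:.0f} EUR stays within the envelope")
```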

4. Understanding new cybersecurity risks

Finally, cloud native is leading to a paradigm shift in cybersecurity. In an on-premises environment, you only need to build a strong enough wall around the applications to counter attacks, Pablo Lopez reminds us. “Cloud native breaks down this pattern. By multiplying the points of entry, we increase the attack surface.” Cloud native therefore means rethinking security by reducing privileged accounts and limiting access to services and data to the strict minimum.

However, “by design” security is not yet natural, notes Seven Le Mesle. The IT department is adopting cloud technologies to give its developers more autonomy. However, they are not always aware of the cybersecurity issues and security teams are overwhelmed. In particular, hackers use “exploits” published on code repository platforms such as GitHub.

The DevSecOps approach should make everyone responsible for security, bearing in mind that it took five years for DevOps to become mainstream. In the meantime, Seven Le Mesle notes the emergence of a new role: that of the cyberchampion, who evangelizes and disseminates best practices within IT teams.