
Sprint: from planning to Scrum retrospective

Conceptualized in the Scrum agile framework, a sprint designates a project cycle during which a defined set of tasks is carried out in order to reach a new stage in the development of a product.

As its name suggests, an agile sprint is a sequential phase in the development of a product. By sprint we mean short iterations that break down an often complex development process to make it simpler, easier to readjust and easier to improve based on the results of intermediate evaluations.

In the logic of agile methods, the objective is to start small and then improve the first version of a product through small iterations. This avoids taking on too much risk. It breaks out of the tunnel effect of V-cycle projects, which are cut into successive phases of analysis, specification, design, coding and testing. Such projects are characterized by a single delivery at the end of the process, with no intermediate feedback from the business users. As a result, the product may no longer meet the needs of the field, which may have changed in the meantime.

Working in successive development sprints has several advantages. First, this process offers better control of the added value and quality of the final product or service. The iterations allow us to rectify the situation at any time according to customer feedback. As the first product is launched quickly, the return on investment is also often faster.

In the end, the sprint mode contributes to increasing customer satisfaction, whether the customers are internal users of the product or end customers. They feel taken into account throughout the development process.

As a general rule, the duration of a sprint varies between one and four weeks. The duration of each cycle will depend on the tasks defined as priorities and the time deemed necessary by the members of the project team to complete them. A sprint aims to achieve a single, specific product development objective.

If the notion of a sprint is well known to teams of developers, it is because it is the cornerstone of Scrum, the most widely used agile framework today. Hence the term “Scrum sprint”. In this method, the time frame of a sprint is determined in consultation with the members of the project team.

But Scrum is not the only agile method to rely on short development iterations. Other agile methods such as Extreme Programming, Feature-Driven Development or Crystal Clear also work with rapid development cycles equivalent to sprints.

The four steps of the agile sprint

1. Sprint planning

Sprint planning is the first step of a sprint. It is a fairly codified event during which the development cycle is organized and the objectives to be achieved are clearly stated. Information about the development process must be shared with all members in order to facilitate communication.

2. The daily meeting

The daily scrum meeting is an intermediate meeting that takes place during the sprint. It brings together the members of the development team. Its objective: to update the plan for reaching the sprint goal based on what has already been done.

Limited to 15 minutes, the daily scrum meeting is also an opportunity to discuss difficulties. If a debate starts on a blocking point, it is recommended to schedule a separate meeting dedicated to the question and limited to the people concerned.

3. The sprint review

At the end of each sprint, a sprint review is organized so that the development team can present the increments brought to the product under development.

The meeting is an opportunity to review the progress of the product. Is it still in line with the business needs? Are any adaptations necessary? During the sprint review, the scope of the next sprint is also discussed.

4. The sprint retrospective

The sprint retrospective takes place after each sprint review. Business users are generally not invited. It provides the development team with a space for exchanging lessons learned during the sprint, and for working on ways to improve processes and tools. It is also an opportunity to review the relationships between team members and any problems that may have been encountered.

In agile language, the sprint retrospective is part of the principle of continuous improvement. The objective is for the next sprint to be more efficient than the previous one and so on. It is an empirical method, i.e. based on experience and self-learning.

In a sprint, the product owner is the guarantor of the product vision. He or she is responsible for feeding the project backlog with business items to be realized. The Scrum master, for his or her part, supports the dev team in adopting and implementing sprints and in developing the application functions corresponding to the business items.

The sprint backlog consists of gathering all the user stories (i.e. functional requests from business users) that the development team has committed to completing during a sprint. Although not prescribed as such by the Scrum framework, it is widely used, even if it is rather heavy to set up. The progress of these requests is represented on a kanban board (or scrum board), so that each team member has a shared view of the current sprint.

There are several types of user stories: feature development, technical environments, bug fixes, etc. These are the finest units of work, and they are always described from the point of view of the end user. They thus tie the developer's unit of work to the added value it will ultimately deliver.
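To make the format concrete, here is a minimal sketch, in Python, of how a user story could be modeled; the field names and the example story are purely illustrative, not part of any Scrum specification.

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    """One unit of work, described from the end user's point of view."""
    as_a: str               # the user role
    i_want: str             # the capability requested
    so_that: str            # the business value expected
    story_type: str = "feature"   # e.g. feature, technical, bugfix
    done: bool = False

    def __str__(self) -> str:
        return f"As a {self.as_a}, I want {self.i_want} so that {self.so_that}."

story = UserStory(
    as_a="account manager",
    i_want="to export my client list as CSV",
    so_that="I can prepare quarterly reviews faster",
)
print(story)
```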

Where the agile sprint refers to product development cycles, the design sprint refers to the upstream creation process. Inspired by design thinking, its purpose is to examine as many ideas as possible as a team. Generally lasting five days, it allows a team to quickly validate a product or service concept. The goal is to come up with one or two prototypes that are then tested on real users on the last day.

Capitalizing on collective intelligence, the design sprint aims to respond quickly to a business problem by defining a clear and proven product policy, accompanied by a development roadmap. Ultimately, the logic is obviously to accelerate time to market while reducing commercial risk. The main fundamentals of the design sprint: a multidisciplinary team, unity of time and place, rapid prototyping, and testing in real conditions.


Scrum: what is it? Definition, guide...

Overview of the agile project management framework Scrum and its main pillars and values: Scrum sprint, scrum master, product owner, daily scrum meeting, scrum board…

What is the Scrum method?

Scrum is the most widely used agile framework. Like other agile methods, Scrum is a project management approach that makes the customer (or user) the main driver of the team in charge of developments. Historically, it is mainly implemented in the IT domain, and in the application development domain in particular. However, it is also used more and more in other areas of product engineering.

The term “scrum” means “scrimmage” and is openly inspired by rugby, a sport that requires a close-knit team moving in the same direction. In the Scrum method, a “scrum” is a sprint: a development phase of one to four weeks that aims to focus the project team on a limited part of the product or service to be delivered. Typically, for an application, this means implementing a handful of functionalities. At the end of each sprint, a sprint review is organized to review the progress of the project with the members of the project team, to examine the possible adaptations to be made, and to identify the objectives of the next sprint.

What is the main contribution of Scrum?

Like other agile methods, the main advantage of Scrum is that it quickly leads to a first iteration of a product or service that can be used in the field. Validated progressively by the customer, the following sprints enrich this first base. The result: project management becomes a much more productive process.

Gone is the tunnel effect of V-cycle projects, which start with requirements analysis and then move on to specification, design, coding and testing, and which are characterized by a single delivery at the end of the process that may no longer fit business issues that have evolved in the meantime.

What is a scrum sprint?

The sprint is the centerpiece of the Scrum agile method. Hence the commonly used name of Scrum sprint. Sprints are short development iterations designed to create a product or service in an incremental way.

What are the 3 pillars of Scrum?

Scrum determines a framework to facilitate the rapid and efficient implementation of a development project. To successfully apply this framework, Scrum recommends three fundamental pillars:

  • Transparency. It aims to ensure that the stakeholders (project team, management and users) share a common language and benefit from all the information necessary to understand the project.
  • Inspection. The purpose of inspection is to check, through regular evaluations, that the development is still in line with the customer’s requirements and that it does not deviate from them.
  • Adaptation. A concept that lives up to its name. Its objective? To correct the project’s trajectory if deviations from the results to be achieved are detected during the inspection phase.

What are the five values of Scrum?

To these three pillars, the Scrum framework adds five values aimed at making the work effective and the collaboration between the actors involved fluid and in line with the objectives to be achieved:

  • Focus. The project team must be fully focused on the development to be achieved.
  • Openness. The project team as well as the management must be open to the Scrum way of working, in particular to interpersonal communication to move forward and solve problems together.
  • Respect. All stakeholders (project team members, management and customers) must show mutual respect.
  • Courage. The project team must have the courage to meet the challenges it will face independently.
  • Commitment. Finally, Scrum team members must be personally committed to achieving the goals of each sprint, a value that also contributes to the success of the process.

Focus, openness, respect, courage and commitment form the acronym Force, which illustrates the purpose sought through these five core values by the Scrum agile method.

What are the 6 principles of Scrum?

In addition to its three pillars and these five values, Scrum puts forward six operational principles:

  • Empirical process control,
  • Autonomy and self-organization of the team,
  • Collaboration,
  • Value-based prioritization,
  • Time-boxing (due dates),
  • Iterative development.

What is the role of the scrum master?

Scrum recommends appointing what it calls a scrum master. His or her role is to guarantee the implementation of the agile framework and to manage the four stages of a scrum sprint: planning, daily meeting, sprint review and sprint retrospective. The scrum master is a central element for the smooth running of the project team and is also the guarantor of the fluidity of exchanges and the productivity of work. As such, he or she identifies blocking points and leads brainstorming sessions to find solutions. Finally, he or she maintains the burndown chart (BDC), which plots the volume of tasks remaining to be completed on the vertical axis against the projected timing on the horizontal axis.
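As an illustration, here is a minimal sketch of the arithmetic behind a burndown chart; the sprint length, committed story points and daily completion figures are hypothetical.

```python
# Minimal burndown computation: ideal line vs. actual remaining work.
sprint_days = 10
total_points = 40  # story points committed at sprint planning

# Hypothetical points completed each day (would come from the scrum board).
completed_per_day = [3, 5, 0, 6, 4, 2, 7, 5, 4, 4]

remaining = total_points
print(f"{'Day':>3} {'Ideal':>6} {'Actual':>7}")
for day, done in enumerate(completed_per_day, start=1):
    remaining -= done
    ideal = total_points * (1 - day / sprint_days)  # straight-line reference
    print(f"{day:>3} {ideal:>6.1f} {remaining:>7}")
```

Whenever the actual column sits above the ideal line, the team is behind plan, and the daily meeting is where the plan gets adjusted.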

Faced with these challenges, a scrum master is expected to master the Scrum framework. He or she must also demonstrate pedagogy and opt for a participative management style based on coaching. From this point of view, the role of scrum master will most often be assigned to the project manager.

What is the salary of a scrum master?

The salary of a scrum master is higher than that of an average IT project manager. Indeed, a scrum master is expected to combine the skills of a project manager with those of a coach and team leader. All this combined with an excellent knowledge of Scrum.

The scrum master can have acquired the mastery of the Scrum method by practice or by following a training leading to a scrum master certification.

What is the role of the product owner?

Alongside the scrum master, Scrum advises to appoint a product owner. His mission? Representing the customer within the project team, he is the guarantor of the product vision. He is responsible for feeding the project backlog with items or business functionalities to be implemented, with detailed specifications for each one. In Scrum language, these items are called user stories. They describe the customer’s needs in simple language that can be understood by all stakeholders.

Within the backlog, the product owner prioritizes the user stories according to four criteria: the business value introduced; the technical and business knowledge required for implementation (is training needed?); the effort required from the project team; and the risks, i.e. the associated constraints that may generate unknowns (technical and business prerequisites, reliance on a supplier, etc.).
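The following sketch shows one possible way to turn these four criteria into a backlog ordering; the weights, scoring scales and stories are hypothetical and not prescribed by Scrum.

```python
# Hypothetical weights: value pushes a story up; knowledge gap, effort and risk pull it down.
WEIGHTS = {"business_value": 0.4, "knowledge_gap": -0.2, "effort": -0.2, "risk": -0.2}

backlog = [
    {"story": "One-click checkout", "business_value": 9, "knowledge_gap": 3, "effort": 8, "risk": 6},
    {"story": "Fix login timeout",  "business_value": 6, "knowledge_gap": 1, "effort": 2, "risk": 1},
    {"story": "Supplier API sync",  "business_value": 7, "knowledge_gap": 7, "effort": 6, "risk": 8},
]

def score(item: dict) -> float:
    """Weighted sum over the four prioritization criteria (each scored 1-10)."""
    return sum(WEIGHTS[k] * item[k] for k in WEIGHTS)

for item in sorted(backlog, key=score, reverse=True):
    print(f"{score(item):6.2f}  {item['story']}")
```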

The product owner's mission is therefore to integrate both the users' requests and the technical constraints linked to the implementation. At each stage of the project, or sprint, he or she is responsible for presenting the work done to the customer, and analyses the feedback with the project team to ensure that the product or service developed corresponds to the client's expectations and that it remains within budget. To facilitate user feedback, Scrum recommends setting up user tests at the end of each sprint.

What is a daily scrum meeting or daily scrum?

The daily scrum is a meeting that takes place during a sprint. It allows each member of the project team to review the tasks completed the day before and those to be completed during the day. It is organized in front of the scrum board, which takes stock of the current sprint.

Limited to 15 minutes, the daily scrum meeting is also an opportunity to discuss blocking points and possible solutions to resolve them. If a debate starts on a thorny issue, Scrum advises scheduling a meeting dedicated to the subject and limited to the people concerned.

What is a scrum board?

The scrum board (or scrum task board) is a board inspired by the Kanban method. It allows you to follow the progress of tasks within the current Scrum sprint.

Most often deployed on a whiteboard, the scrum task board is divided into at least three columns: tasks to do, tasks in progress and completed tasks. Sticky notes representing these tasks are moved from one column to another as the sprint progresses. Depending on the needs, intermediate columns can be added (testing, acceptance…).
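A scrum board boils down to a very simple data structure. The following Python sketch, with illustrative column names and tasks, models the column-to-column movement described above.

```python
# A scrum board as a mapping of columns to task lists.
board = {
    "to do":       ["user search", "export CSV", "email alerts"],
    "in progress": [],
    "done":        [],
}

def move(task: str, src: str, dst: str) -> None:
    """Move a sticky note from one column to another."""
    board[src].remove(task)
    board[dst].append(task)

move("user search", "to do", "in progress")
move("user search", "in progress", "done")

for column, tasks in board.items():
    print(f"{column:12} {tasks}")
```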

Ken Schwaber and Jeff Sutherland are the authors of the Scrum Guide, which lays the foundations of the agile method of the same name. Foundations, pillars, values, roles, meetings… it reviews the different concepts of the framework.

Independent from any software publisher or service provider, the Scrum Guide is available on the web. It can be downloaded from Ken Schwaber and Jeff Sutherland's website (Scrum.org).

Originally, it was Ken Schwaber who laid the foundations of what would become the Scrum method during a conference in 1995. He then detailed the principles in an article published in 1996 in the Cutter Business Technology Journal (article entitled Controlled Chaos: Living on the Edge).

What are the missions of the Scrum Alliance?

Founded in 2001, the Scrum Alliance is a non-profit organization whose mission is to promote the agile movement through certifications. Led by members of the agile community, it fuels debate and research in this field.

What are the scrum master certifications?

There are several recognized certifications in this field. They aim to validate and label Scrum skills. Historically, the Scrum Alliance was the first non-profit organization to offer certification training in this field. There are three levels: Certified ScrumMaster, Advanced Certified ScrumMaster and Certified Scrum Professional ScrumMaster. The same logic applies to the product owner via Certified Scrum Product Owner, Advanced Certified Scrum Product Owner and Certified Scrum Professional Product Owner.

In France, Agilbee, a training company specialized in agile methods, offers the Scrum Alliance courses and certifications in French. The Scrum League's Scrum certifications are also available in French.

The main alternative to the Scrum Alliance’s offer is Scrum.org, which has also set up Scrum certifications, particularly through the Professional Scrum Master and Professional Scrum Product Owner programs. Available only in English, they have the advantage of having been designed directly by the authors of Scrum, Ken Schwaber and Jeff Sutherland.

Scrum vs Kanban

Often set against each other, the Scrum and Kanban agile methods are much more complementary than they seem. The former aims to split product development processes into several cycles. The latter's main objective is to limit wasted time and energy by limiting the number of production tasks in progress.

While Scrum is adapted to the management of a single project, Kanban is better suited to the management of several projects or to TMA (third-party application maintenance) and MCO (maintenance in operational condition).

Scrum vs Safe

While Scrum remains the most popular agile method at the moment, other methodologies are gaining ground. This is particularly true of the Scaled Agile Framework, more commonly known as SAFe, which allows for more flexible work management in large companies.


Sovereign cloud comparison: OVHCloud ticks the most boxes

What are the criteria for a sovereign cloud? How do the major providers position themselves with respect to each? Here is an overview.

How can we define today what a sovereign cloud is? JDN asked Philippe Latombe, a MoDem deputy, member of the National Assembly's law commission and an expert on the cloud. Here is his answer: “It is a cloud located and operated by a French company, a company that has no connection with a foreign parent company and which is therefore protected against extraterritorial legislation such as the American Cloud Act.” The Cloud Act allows the US federal government to access data hosted by an American company, regardless of where in the world it is stored (see the study by the American law firm Greenberg Traurig LLP).

“A sovereign cloud must also be backed by server and network equipment designed and assembled in France, with the main components, such as processors or memory, also made in France,” adds Philippe Latombe. This is a precaution that limits the risk of backdoors that could be used by the CIA under the FISA (Foreign Intelligence Surveillance Act). “To avoid any external interference, the supplier will finally propose a system to encrypt the customer's data, giving them the possibility to use their own encryption keys,” adds the deputy.
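To illustrate the principle of customer-held keys, here is a minimal client-side encryption sketch using Python's cryptography library. Real providers expose this through key management services (bring-your-own-key mechanisms); this shows only the idea, not any vendor's actual API.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# The customer generates and keeps the key; the provider never sees it.
customer_key = Fernet.generate_key()
cipher = Fernet(customer_key)

document = b"commercially sensitive payload"
encrypted = cipher.encrypt(document)   # only this ciphertext is sent to the cloud

# Only the key holder can read the data back.
assert cipher.decrypt(encrypted) == document
```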

Based on this definition, JDN draws up below a comparison of the main cloud providers, French or not, present on French soil, screening each of them against all of the sovereignty criteria mentioned.

 
The seven sovereignty criteria analyzed: granular encryption service*; offering isolated from extraterritorial legislation; proprietary software platform made in France; servers and network equipment designed in France; servers assembled in France; processor made in France; SecnumCloud certification.

Criteria met per provider:

  • AWS: 1
  • Google Cloud: 1, plus 2 in project
  • Microsoft Azure: 0, plus 2 in project
  • Oracle: 0
  • Orange Flexible Cloud: 1
  • OVHCloud: 4
  • Scaleway: 3
  • 3DS Outscale: 3

* Encryption offering covering the main cloud services offered (virtual machines, storage, database services, container as a service, Kubernetes as a service, functions as a service…)

Of the 7 criteria analyzed, OVHCloud is the one that meets the most, i.e. 4. In France, Octave Klaba's group obviously offers a legal structure that isolates its offer from offshore regulations. It designs its own servers and assembles them in its factory in Croix, in the north of France. This industrial infrastructure manufactures more than 80,000 servers every year. This policy of internalization allows OVH to largely optimize and, above all, secure its supply chain. On the other hand, the Roubaix-based group does not build the electronic components of its machines. As a result, it remains dependent on the vagaries of this market, particularly in the critical microprocessor segment. Not to mention the backdoors that can creep in.

OVHCloud has also obtained the very select SecnumCloud certification awarded by the French National Agency for Information Systems Security (Anssi). This certification was deliberately included among the sovereignty criteria analyzed. Why? Because it brings the recognition of the French state as to “the quality and robustness of the service, the competence of the provider, and the trust that can be placed in it” (says Anssi). The fact remains that the certified service is OVH's private cloud, which, unlike its public cloud offering (based on an open source foundation), relies on the proprietary American platform VMware. For its part, 3DS Outscale has obtained the precious certification for its public cloud infrastructure. However, the cloud subsidiary of Dassault Systèmes has chosen the NetApp storage system and Cisco network equipment, which are also American technologies. “SecnumCloud requires us to use devices to detect third-party network traffic (from, for example, spy-oriented sniffers embedded in US technologies under FISA, editor's note),” says David Chassan, Director of Strategy at 3DS Outscale.

Towards sovereign processors?

In terms of processors, the French sovereign cloud sector could be on the rise again in the wake of the Électronique France 2030 plan. Unveiled by the government in July, it plans to inject $5 billion into semiconductors, including $800 million into the next generation of 10-nanometer processors. Targeting the IoT but also the cloud, it is part of the second important project of common European interest (PIIEC). For France, this program adds 10 billion dollars of spending targeting about fifteen R&D projects in electronics and telecoms, as well as the construction of a dozen new factories or manufacturing lines for components. The combined ambition of the PIIEC and the Électronique France 2030 plan? To increase semiconductor production capacity in France by around 90% by 2027.


Among semiconductor champions, there is the unavoidable STMicroelectronics, but above all Soitec, which targets the edge computing segment in particular. This positioning will become increasingly important with the growing trend toward decentralized cloud computing. Among server manufacturers, 2CRSI is a key player, whose technology has been chosen by OVHCloud to equip its Asian datacenters.

Sovereign offers deemed “illusory”

“The issue of the sovereign cloud, which raises the question of the integrity of the security of the data entrusted to providers, is an essential issue recognized by all the players in the market, whether American, European or French,” explains Olivier Iteanu, a lawyer at the Paris bar and an expert on digital legislation. Some American cloud providers have gone so far as to appropriate the term “sovereign cloud” and integrate it into their marketing policy. This is notably the case for Microsoft and Oracle, which have both launched so-called “sovereign” European offerings. These solutions guarantee the localization of data in the customer's country, support handled by local teams, and even isolation from the supplier's other, “non-sovereign” cloud regions.

“Here, the promise is illusory. It goes without saying that these services are not impervious to the Cloud Act, which takes precedence over any contract. With this legislation, the US is proposing a legal tool that legalizes industrial espionage and data capture,” insists Olivier Iteanu. “If a French aircraft manufacturer had the plans for one of its future models, stored on an American cloud, stolen, it will be able to turn against the provider, but the latter will then be able to benefit from the protection of the Cloud Act.”

Trusted rather than sovereign clouds

For the lawyer, SecnumCloud certification may be the solution that puts everyone on the same page. In its version 3.2, released in October 2021, SecnumCloud incorporates new requirements to ensure that the provider and the data it processes cannot be subject to non-European laws. Data localization, human resources, access control, information encryption, risk management, real-time incident detection… The Anssi reference framework is very detailed, even specifying requirements for the physical security of data centers.

By seeking to distribute their cloud via French third parties, Microsoft and Google aim to obtain the famous certification. Microsoft will use Bleu, a joint venture created by Orange and Capgemini, to market its Azure cloud in France. As for Google, it has joined forces with Thales to create a joint venture (called S3NS) under French jurisdiction. “The success of the Bleu and S3NS projects will depend on how their services are organized and framed. In both cases, the teams and the cloud infrastructures will have to be entirely isolated from those of the publisher, in addition to being attached to very distinct legal structures aimed at guaranteeing a total seal with respect to the Cloud Act,” warns Olivier Iteanu. The Azure offering marketed by Bleu should be launched by the end of September. As for S3NS, it is already being tested by a few companies in beta. Both companies describe their future offerings as a trusted cloud, not a sovereign cloud, a model for which they are far from ticking all the boxes.


MLOps: what is it?

Short for machine learning operations, MLOps aims to design learning models suitable for deployment in production and then maintain them throughout their lifecycle.

What is MLOps?

MLOps aims to design and maintain machine learning models that can be used in the field. Like DevOps for applications, it involves mastering their entire life cycle. The goal? To take deployment constraints into account from the model's design and training stages onward. Following the logic of agile methods, MLOps takes shape through the implementation of learning pipelines combined with model monitoring tools.
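As an illustration of such a pipeline, here is a minimal training run instrumented with MLflow, one of the open source tools listed below; the experiment name and the toy model are arbitrary choices for the sketch.

```python
# pip install mlflow scikit-learn
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("iris-demo")
with mlflow.start_run():
    model = LogisticRegression(max_iter=500)
    model.fit(X_train, y_train)

    # Log parameters, metrics and the model itself for traceability.
    mlflow.log_param("max_iter", 500)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    mlflow.sklearn.log_model(model, "model")
```

Every run is versioned this way, which is what later allows a model in production to be traced back to its training code, parameters and metrics.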

The MLOps engineer is the central protagonist here. This emerging profession is a cross between the data scientist and the data engineer.

What are the building blocks of MLOps?

MLOps requires the implementation of several building blocks that together drive the entire machine learning cycle (a minimal sketch of the first one follows the list):

  • A store of reusable models,
  • A store of reusable features,
  • A continuous integration and delivery (CI/CD) tool,
  • A model monitoring and traceability tool,
  • A collaborative environment.
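As a toy illustration of the first building block, here is a naive versioned model store backed by the local filesystem; a real deployment would rely on a registry such as MLflow's, and the storage layout here is purely illustrative.

```python
import pickle
from pathlib import Path

class ModelStore:
    """Naive versioned model store backed by the local filesystem."""

    def __init__(self, root: str = "model_store"):
        self.root = Path(root)
        self.root.mkdir(exist_ok=True)

    def save(self, name: str, model: object) -> int:
        """Persist a new version of the model and return its version number."""
        version = len(list(self.root.glob(f"{name}-v*.pkl"))) + 1
        with open(self.root / f"{name}-v{version}.pkl", "wb") as f:
            pickle.dump(model, f)
        return version

    def load(self, name: str, version: int) -> object:
        with open(self.root / f"{name}-v{version}.pkl", "rb") as f:
            return pickle.load(f)

store = ModelStore()
v = store.save("churn", {"weights": [0.1, 0.8]})  # any picklable object
print(store.load("churn", v))
```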

What are the tools of MLOps?

Major MLOps tools include:

  • Dataiku (proprietary application),
  • DataRobot (proprietary application),
  • Domino Data (proprietary application),
  • Kubeflow (open source application created by Google),
  • Metaflow (open source application),
  • MLflow (open source application).
MLOps tools comparison, across five capabilities (experiment tracking and versioning, AutoML, orchestration and deployment management, monitoring, collaboration):

  • Dataiku: 4 of the 5 capabilities
  • DataRobot: 4 of the 5 capabilities
  • Domino Data: 4 of the 5 capabilities
  • Kubeflow: 1 of the 5 capabilities
  • Metaflow: 1 of the 5 capabilities
  • MLflow: 2 of the 5 capabilities
Other solutions often mentioned: Algorithmia (acquired by DataRobot), Cnvrg.io, Polyaxon, Valohai and more recently Comet, Landing AI or Weights & Biases.

On the cloud provider side, AWS, Google and Microsoft all integrate the MLOps dimension into their respective machine learning platforms: Amazon SageMaker for the first, Vertex AI for the second and Azure Machine Learning for the third.

Several MLOps training modules are offered online and in science faculties or engineering schools. The MLOps engineer is above all a data scientist, and data science training is the key to entering the profession. He or she must also master the rules of programming and software engineering.

Datascientest is for the moment the only institute to offer MLOps training referenced on the French government's training platform, which consequently allows it to be financed via a personal training account.

MLOps vs DevOps

DevOps, a contraction of Development (Dev) and Operations (Ops), combines two essential functions: application development and system engineering. The challenge is to take into account deployment constraints from the programming phase and thus improve the quality of the finished product. MLOps is derived from DevOps, but more specifically addresses machine learning oriented applications.


Cloud Computing market

According to the December 2014 edition of PAC's CloudIndex, the maturity of companies with regard to the cloud continues to grow, and the adoption of its solutions has even jumped, due in particular to the prior underestimation of actual usage. As a result, 55% of companies now say they use cloud solutions, compared to 29% last June.

Companies are primarily using SaaS applications (54%). IaaS offerings, less widespread until now, are declared used by 46% of respondents. According to the firm, they are of particular interest to companies with fewer than 500 employees, which use these solutions for application hosting (54%), testing (49%) and website hosting (46%). PaaS remains in the background (+6 points), mainly because it is “mainly confined to developers”.

But the Cloud is not just about solutions. Services are developing in parallel. The French firm estimates the value of services (consulting, integration …) marketed in 2013 at 1.2 billion dollars. And these expenses should grow by an average of 39% per year by 2018. In total, the French Cloud market should reach 5 billion dollars in late 2014 and exceed 7 billion in 2018.

The quest for agility – According to CloudIndex, “the need for flexibility and the desire to reduce costs are the main reasons for moving to the cloud (66%), ahead of improving time to market (60%) and developing innovative products, solutions or approaches (59%).”

30% of companies are now formalizing real cloud strategies: a result that would tend to demonstrate that “organizations are increasingly using the cloud in an organized and strategic way rather than opportunistically.” This is also true for SaaS, which is no longer confined to less strategic areas. Nearly “eight out of ten organizations that use SaaS consider at least one of their SaaS applications to be strategic to their business,” according to the barometer, which “confirms that SaaS is not just a stopgap measure or an unimportant add-on.”

Security – There are many reasons not to use the cloud, but the main one remains the same and far ahead of the others: security. These fears have become even more pronounced and are considered important by nearly two-thirds of respondents, compared to less than 50% six months earlier. However, these fears are often unfounded, according to PAC.

For the firm, this feeling of insecurity is “regularly fueled by high-profile operational incidents, hacker attacks, or even international espionage cases, such as that of the NSA.”

Public cloud – For PAC, the need for proximity expressed by companies is reflected in the search for local service providers. “This desire to deal with local suppliers is clearly illustrated by our surveys. The criterion of datacenter location is on the rise.”

The importance of proximity for users is more than tangible. The firm assures us that this expectation covers both cloud providers and service providers: local roots are important, in the public cloud as in the private cloud.


CIO: the question is not whether to change, but how to manage change

IT organizations are under pressure to further reduce the cost of IT operations, increase service levels, stay compliant and reduce risk while improving visibility into financial and operational decisions. And they must do so with IT infrastructures that are more complex, hybrid and sophisticated than ever before.

Managing change, whether driven by business or functional requirements or by operational reasons (lifecycle management, patching, risk management…), is becoming proportionally more complex. To cope, the IT department needs strong management disciplines supported by integrated tools. To manage change safely, taking everyone's requirements into account, it must arm itself with the information needed to make the right decision.

Managing IT changes

The introduction of agile delivery of digital solutions and DevOps workflows, combined with the adoption of cloud-native architectures hosted in the public cloud, has also changed the role of the change management discipline. Digital solutions are now deployed at a much faster, almost continuous pace, and traditional change management concepts need to be rethought. In the agile world, pre-approved standard changes are becoming the norm, rather than the traditional changes of the traditional way of working. This does not mean that the control function of change management is no longer needed, but that understanding and visibility of all these changes become more important. This requires automated change discovery and detection solutions that are integrated into the DevOps value stream.

The move to DevOps introduces an additional challenge with respect to operational changes: the use of infrastructure as code (IaC) by agile development teams – using provisioning tools such as Terraform, for example – leads to increased sprawl, opacity and even compliance issues if not governed properly, as these teams will no longer use, or only partially use, the consumption and compliance pathways put in place by IT. Overcoming these challenges requires a governance solution that places guardrails around the use of IaC. Such guardrails ensure that the infrastructure changes applied by agile teams are tracked, including their costs and spending against budget, and that these changes are valid in light of IT compliance rules.
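As a sketch of one such guardrail, the following Python snippet scans the JSON output of `terraform show -json` for resources missing mandatory tags. The tag policy is hypothetical, and a production gate would check far more (cost estimates, compliance rules, approved modules).

```python
import json

REQUIRED_TAGS = {"owner", "cost_center"}  # hypothetical tagging policy

def check_plan(plan_path: str) -> list[str]:
    """Flag planned resources that are missing the required tags."""
    with open(plan_path) as f:
        plan = json.load(f)
    violations = []
    for rc in plan.get("resource_changes", []):
        after = (rc.get("change") or {}).get("after") or {}
        tags = set(after.get("tags") or {})
        missing = REQUIRED_TAGS - tags
        if missing:
            violations.append(f"{rc['address']}: missing tags {sorted(missing)}")
    return violations

# Usage (e.g. as a CI step that fails the pipeline on violations):
#   terraform plan -out plan.bin && terraform show -json plan.bin > plan.json
#   then call check_plan("plan.json") and report each violation.
```

Run as a CI step, such a check lets agile teams keep their pace while every IaC change stays visible and compliant.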

Inventory assets for controlled change management

Managing changes to the physical, virtual, financial and contractual aspects of IT assets on-premises or in the public cloud requires robust inventory, IT asset management (ITAM), software asset management (SAM) and cloud financial management policies as part of a broader enterprise service management (ESM) solution for IT organizations. No one likes unexpected expenses or fines related to software usage. Organizations need to manage software licenses across all IT platforms, including those hosted in the public cloud, to moderate licensing costs and reduce the risk of non-compliance. An up-to-date inventory of hardware and software across all IT environments is an essential foundation and can only be achieved effectively and efficiently through automated discovery, cost tracking and change management.
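A minimal illustration of the SAM logic is reconciling discovered installations against entitlements; the product names and counts below are invented.

```python
from collections import Counter

# Installations found by an automated discovery scan vs. licenses owned.
installed = ["VisioPro", "VisioPro", "DbSuite", "DbSuite", "DbSuite", "IdeMax"]
entitlements = {"VisioPro": 2, "DbSuite": 2, "IdeMax": 5}

for product, count in Counter(installed).items():
    owned = entitlements.get(product, 0)
    status = "NON-COMPLIANT" if count > owned else "OK"
    print(f"{status}: {product} installed {count}x, licensed {owned}x")
```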

Another salient issue is how IT can contribute to corporate environmental goals as part of its ESG policies. Accurate visibility into all IT assets and their relationships, preferably with integration data from data center infrastructure management (DCIM) tools that monitor environmental metrics, can facilitate decision making for infrastructure consolidation, transformation, lifecycle management and more to reduce the carbon footprint.

Control and track

So, do we need to change? Absolutely. But change must be controlled, tracked and governed, and it must be done within the modern DevOps value stream. IT operations management solutions must provide the capabilities to do this: discover all hardware and software assets and their relationships; enable enterprise service management, including cloud design and deployment, ITAM and SAM; and support agile teams by providing continuous, real-time compliance and cost management for IaC-based changes. Service assurance and observability solutions further provide a feedback loop into the planning phase of the DevOps chain. This closes the loop, and new changes can be undertaken.

In today’s agile way of working, IT change has become one of the “permanent” elements of the continuous everything paradigm, requiring constant visibility, cost tracking, and compliance validation.


The cloud business model will see big changes, according to VMware

Enterprise blockchain may still be in experimental mode, but it could soon change the way applications and systems are designed, moving from an architecture managed by individual organizations to architectures in which applications and data are shared and secured across multiple entities – in essence, a truly decentralized form of computing.

There are many cloud service providers, but even more data centers. Do all these data centers, with countless amounts of underutilized computing power, represent an untapped pool of cloud computing power that could flatten the cloud ecosystem?

That, at least, is the view of Kit Colbert, CTO at VMware, who sees a much more decentralized future than the current one. I had the opportunity to speak with him at VMware's Explore conference in San Francisco last week, where he described the factors that are opening up enterprise computing.

Increasingly decentralized environments

One emerging scenario is applications built around blockchain or distributed ledger technologies, with their ability to enable trust among multiple participants, Kit Colbert relates. “Enterprise blockchain is very well aligned with our focus.”

Today, the focus is on distributed applications that are built and run with cloud-native or Kubernetes-based building blocks. Attention, however, is increasingly turning to decentralized environments, he noted. Distributed architectures are operated by a single entity, whereas decentralized architectures are operated by multiple organizations.

Although both architectures support multiple application instances and a shared database, “the big difference is that in a decentralized architecture, different companies will be running some of those instances, instead of being run by a single organization,” he explained.

That means those organizations “probably won’t trust each other completely,” Kit Colbert continued. “That’s where blockchain comes in, to support those kinds of use cases.”

“The Airbnb of computing capacity,” according to Kit Colbert

While decentralized blockchain-based systems still represent a small fraction of VMware’s offerings, Kit Colbert expects that to grow as the technology develops.

Cloud computing itself is a heterogeneous mix, and will remain so. While public cloud computing is a big part of the future for many IT plans, on-premises environments still have their place, Kit Colbert believes.

“Even if a company was born in the cloud or moves to the cloud, we often find that it brings things back from the cloud. Often, for reasons of cost, compliance, security, locality, or sovereignty, it's better to keep things in-house. Putting everything in the public cloud is not the right solution, keeping everything on premises is not the right solution. Instead, to be smart, you have to say: OK, what are the requirements of the application, and where is the best place to meet all those requirements?”

From a data center perspective, the technologies are now in place to support grid-like cloud resources, using not only cloud provider resources, but also the capabilities of shared private data centers offered in an open spot market – a sort of Airbnb of computing capacity. This includes the ability “to run a virtual machine that can be protected by an administrator,” says Kit Colbert. “We can apply that cryptographically, which we couldn’t do a few years ago, thanks to processor core changes.”

VMware once piloted a “cloud exchange” in which unused capacity in corporate data centers could be sold on an open market. The project was a learning experience for the company and helped identify potential problems, says Kit Colbert.

Conducted among VMware’s cloud providers and platform partners, the main issue encountered during the pilot was security – moving data to unknown locations. “We can’t write unencrypted data to a hard drive that belongs to another customer,” says Kit Colbert. “That’s a red line – we have to have encryption. We also have to have a way to prevent the operator from accessing the virtual machine or its data, either at runtime or at rest.”

The CTO role is evolving

Providing security also introduces “liability issues for customer operators,” he continues. “They’re not going to want to sign indemnification clauses, and a whole bunch of legal and other things that we might get caught up on as well.”

Kit Colbert also discussed the evolving role of his profession, the chief technology officer, which often overlaps with chief information officers and chief digital officers. “The CTO is one of the least defined roles in the industry,” he believes. “It can be a vice president of engineering, a super sales engineer, an evangelist or a product manager… or it can be more of an individual contributor, more of an influencer, an architect type.”

Kit Colbert oversees innovation, ESG, and the core platforms and services that support the vendor’s business units. “In addition, I provide the overall technical strategy for the company: this is where we should be going as a company, and this is the outline of what we should be doing as a company.”


Multi-cloud architectures: a new deal in cybersecurity

Over the past few years, the cloud revolution has profoundly transformed the IT business models of organizations across all industries. A majority of organizations now use multiple applications and cloud hosting services, integrated within a single information architecture.

This “multi-cloud” model has become popular due to its many operational advantages, but it raises many questions that need to be anticipated in order to reap its full benefits in complete security.

Common problems

The use of multi-cloud takes different forms depending on each organization's data management policies. While it is particularly common to use separate vendors for infrastructure, platform and application needs, many organizations are now using multiple IaaS, PaaS and SaaS services simultaneously.

This choice reflects a desire to avoid too strong a dependence on a single supplier, but it is explained above all by the technical adaptability it allows. By opting for dedicated services for each need and suppliers specialized in each task, network administrators can design IT architectures that are perfectly tuned to business needs and always optimally sized. Financially, it is also a way to take advantage of the fierce competition among cloud providers to get the best available price for each service.

These undeniable operational advantages, however, pose many challenges, not least the considerable added complexity of cybersecurity efforts. In a context of sharply increasing cyber attacks, linked in particular to geopolitical tensions and to the new opportunities created by the digitalization of companies, accumulating cloud services also means multiplying potential security breaches. The interconnection between cloud services in multi-cloud architectures can lead to the uncontrolled circulation of sensitive and personal data, whose processing is now strictly regulated.

A comprehensive review of security policies

For organizations that are aware of these issues, the gradual adoption of the multi-cloud model must be accompanied by a regular review of security and data handling policies. As cloud solutions continue to evolve, the technical, legal and regulatory compliance of all services must be re-evaluated at regular intervals, taking into account the uses and criticality of the data exchanged.

Particular attention must be paid to the security of APIs due to the significant differences in maturity between providers in this area. Despite the high complexity of the data circulation pattern in multi-cloud environments, the objective must be to achieve a unified vision of the application ecosystem in order to deduce an appropriate security plan. A task in which the close cooperation of the company’s partners and its cloud providers is essential.
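The following sketch illustrates the idea of such a unified view: hypothetical per-provider fetchers (real ones would call each cloud's inventory API) normalized into a single list.

```python
# Hypothetical per-provider fetchers; real ones would call each cloud's API.
def list_aws_assets():   return [{"cloud": "aws",   "service": "s3",      "name": "invoices"}]
def list_azure_assets(): return [{"cloud": "azure", "service": "blob",    "name": "archives"}]
def list_gcp_assets():   return [{"cloud": "gcp",   "service": "storage", "name": "exports"}]

def unified_inventory():
    """Aggregate every provider's assets into one normalized view."""
    inventory = []
    for fetch in (list_aws_assets, list_azure_assets, list_gcp_assets):
        inventory.extend(fetch())
    return inventory

for asset in unified_inventory():
    print(f"{asset['cloud']:6} {asset['service']:8} {asset['name']}")
```

Such a consolidated map is the starting point from which an appropriate security plan can be deduced.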

A lever for optimization

This process of constant reassessment of the infrastructure and its security should not be seen as a simple precautionary measure. Beyond cybersecurity, the accumulation of sometimes redundant tools and the increasing complexity of IT architectures can put a strain on the productivity of both IT departments and business teams.

Because it provides a better understanding of the company’s application environment, rigorous management of multi-cloud issues can also be the starting point for optimizing security processes and business processes.


Cloud: Arm deploys next-generation data center chip

Renowned chip designer Arm on Wednesday unveiled its next generation of data center chip technology. Dubbed Neoverse V2 and known by the codename “Demeter,” the new line of chips is meant to address the explosive growth of data from 5G and internet-connected gadgets. Arm’s platform includes the new V-series core and the CMN-700 mesh interconnect.

In the British designer's traditional formula, Arm creates the underlying intellectual property for this new generation of chips, which other companies, such as Qualcomm or Apple, can adopt under license to create their own processor chips. In particular, Arm cites chip giant Nvidia's latest Grace data center processor as being built on the Neoverse V2 design. “Grace will combine the performance of V2 with the power efficiency of LPDDR5X memory to deliver twice the performance per watt of servers powered by traditional architectures,” Arm argues.

As a reminder, Arm does not manufacture its own chips. The company does not have its own manufacturing facilities. Instead, it licenses its products to other companies, which it calls “partners.” They use Arm's architecture as a kind of template, building systems that use Arm's cores as their central processors.

Many Samsung and Apple smartphones and tablets, and all devices with Qualcomm processors, use some of Arm's intellectual property. And while Arm's technology powers most cell phones, the company has also made a strong push into data center processors, a segment long dominated by Intel and AMD.


Cloud Migration and Enterprise Architecture go hand in hand

The “cloud only” philosophy seems to be gaining support in a growing number of organizations. But beware: a total, indiscriminate migration of the entire information system (IS) to the cloud is rarely a good choice. Such a decision would make any enterprise architect jump, since it amounts to treating all IS applications alike and erasing their intrinsic differences: business value, lifecycle, complexity and, of course, the sensitivity of the data processed.

And this is all the more true given that cloud projects are far from neutral in budgetary terms. Choosing the cloud is not something to be taken lightly: it can make sense for some applications, and be totally useless, or even counter-productive, for others. The study must be carried out application by application: a role for the enterprise architect.

1. Manage cloud migration priorities

The usefulness to business teams – in other words, business criticality – is naturally the first thing to measure for a cloud migration: why build a project for an application that is rarely used or brings little value to the business? On the contrary, a core business application gains in durability, efficiency, flexibility and scalability by migrating to the cloud. This is the promise of this option: spending less time on infrastructure issues and more time on functional changes.

This value is also measured over time: a core business application that is at the end of its life cycle, despite its value, will not be migrated, but replaced by a “cloud native” application. Conversely, an application that is still immature or lacking in stability (functional or technical) should not be a priority for a cloud migration. In this context, the enterprise architect will be able to finely evaluate the life cycle of each application to decide whether it is appropriate to launch a migration project.

2. Application complexity determines the project load

A migration project is far from neutral for the organization, and all the more so as the application concerned is complex. This complexity is measured first and foremost in terms of the components of the application itself: the technology used for its development, the level of specific developments or customized features, etc.

Measuring these elements makes it possible to evaluate the workload of both the IT department and the business teams, who will be required to contribute to the migration, according to the possible types of strategy: “rehosting” (lift and shift), “replatforming” (moving to a cloud-native platform), “repurchasing” (switching to a new product), or “refactoring” (redesigning the architecture). Alternatively, it can be decided to decommission the application (“retire”) or to keep it as is (“retain”). Together, these six strategies (the “6Rs”) make up the possible options for cloud migration.
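As an illustration only, the following toy heuristic maps a few application attributes onto one of the six strategies; real assessments weigh many more factors, and the attributes and thresholds here are invented.

```python
def suggest_strategy(app: dict) -> str:
    """Toy decision rule mapping application attributes to one of the 6Rs."""
    if not app["business_value"]:
        return "retire"            # no value: decommission
    if app["end_of_life"]:
        return "repurchase"        # replace with a cloud-native product
    if app["data_sensitivity"] == "high":
        return "retain"            # keep on-premises for now
    if app["complexity"] == "high":
        return "refactor"          # re-architect for the cloud
    if app["needs_managed_platform"]:
        return "replatform"
    return "rehost"                # simple lift and shift

app = {"business_value": True, "end_of_life": False,
       "data_sensitivity": "low", "complexity": "low",
       "needs_managed_platform": False}
print(suggest_strategy(app))  # -> rehost
```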

But the complexity of an application can also be assessed by its position in the information system and its interconnections with other elements of the IS. For example, a CRM or an ERP is inherently complex, because it interfaces with many other applications, and even with the company’s ecosystem. This also makes them more difficult to migrate. Here again, the work of the enterprise architect in terms of IS mapping is a valuable tool for evaluating migration possibilities.

3. Data sensitivity: managing risk and compliance

Finally, it is also the data that must guide the organization's choices when it comes to cloud migration, because the process is far from trivial in terms of governance, risk management and compliance (GRC). In some cases, migration is simply not possible, at least not to public clouds: sensitive business sectors, operators of vital importance (OIV), operators of essential services (OSE), public administrations, etc.

When it is possible to move data to the cloud, whatever the sector concerned, the enterprise architect must ensure that security aspects are managed effectively: migrating applications, and therefore data, does not mean delegating the entire security of the IS to a third party. On the contrary, it is even a matter of redoubling vigilance.