Title: Managing Cloud Native Data on Kubernetes

Author: Jeff Carpenter
Category: Specialized
Subcategory: Computers / IT

Do you work on solving data problems and find yourself faced with the need for modernization? Is your cloud native application limited to the use of microservices and service mesh? If you deploy applications on Kubernetes without including data, you haven't fully embraced cloud native. Every element of your application should embody the cloud native principles of scale, elasticity, self-healing, and observability, including how you handle data. Engineers who work with data are primarily concerned with stateful services, and this will be our focus: increasing your skills to manage data in Kubernetes. Our goal in this book is to enrich your journey to cloud native data. If you are just starting with cloud native applications, then there is no better time to include every aspect of the stack. This convergence is the future of how we will consume cloud resources.

So what is this future we are creating together? For too long, data has been something that has lived outside of Kubernetes, creating a lot of extra effort and complexity. We will get into the valid reasons for this, but now is the time to combine the entire stack to build applications faster at the needed scale. Based on current technology, this is very much possible. We've moved away from the past of deploying individual servers and toward a future where we will be able to deploy entire virtual data centers. Development cycles that once took months and years can now be managed in days and weeks. The open source contribution isn't a tiny part of this, either. Kubernetes and the projects we talk about in this book are under the Apache License 2.0, unless otherwise noted, and for good reason. If we build infrastructure that can run anywhere, we need a license model that gives us the freedom of choice. Open source is both free-as-in-beer and free-as-in-freedom, and both count when building cloud native applications on Kubernetes. Open source components can now be combined into a single deployment on Kubernetes that is portable from your laptop to the largest cloud provider. Open source has been the fuel of many revolutions in infrastructure, and this is no exception.

That's what we are building: the near-future reality of fully realized Kubernetes applications. The final component is the most important, and that is you. As a reader of this book, you are one of the people who will create this future. Creating is what we do as engineers. We continuously reinvent the way we deploy complicated infrastructure to respond to increased demand. When the first electronic database system was put online in 1960 for American Airlines, a small army of engineers made sure it stayed online and worked around the clock. Progress took us from mainframes to minicomputers, to microcomputers, and eventually to the fleet management we do today. Now, that same progression is continuing into cloud native and Kubernetes. This chapter will examine the components of cloud native applications, the challenges of running stateful workloads, and the essential areas covered in this book. To get started, let's turn to the building blocks that make up data infrastructure.

Stateless services

These are services that maintain information only for the immediate life cycle of the active request—for example, a service for sending formatted shopping cart information to a mobile client. A typical example is an application server that performs the business logic for the shopping cart. However, the information about the shopping cart contents resides external to these services. They only need to be online for a short duration, from request to response. The infrastructure used to provide the service can easily grow and shrink with little impact on the overall application, scaling compute and network resources on demand when needed. Since we are not storing critical data in the individual services, they can be created and destroyed quickly with little coordination. Stateless services are a crucial architecture element in distributed systems.
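
To make the distinction concrete, here is a minimal sketch of how a stateless service such as the shopping cart application server might be deployed as a Kubernetes Deployment, written with the official kubernetes Python client. The names, container image, port, and replica count are illustrative assumptions, not details taken from this book.

    # Sketch: a stateless service deployed as a Kubernetes Deployment.
    # Assumes the official `kubernetes` Python client and a configured kubeconfig;
    # the name, image, and replica count are placeholders.
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="cart-api"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # interchangeable replicas; scale up or down freely
            selector=client.V1LabelSelector(match_labels={"app": "cart-api"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "cart-api"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="cart-api",
                            image="example.com/cart-api:1.0",  # hypothetical image
                            ports=[client.V1ContainerPort(container_port=8080)],
                        )
                    ]
                ),
            ),
        ),
    )

    apps.create_namespaced_deployment(namespace="default", body=deployment)

Because no pod in this Deployment owns any data, Kubernetes can replace or rescale the replicas at will, which is exactly the elasticity described above.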

Stateful services

These services need to maintain information from one request to the next. Disks and memory store data for use across multiple requests. An example is a database or file system. Scaling stateful services is much more complex since the information typically requires replication for high availability. This creates the need for consistency and mechanisms to keep data in sync between replicas. These services usually have different scaling methods, both vertical and horizontal. As a result, they require different sets of operational tasks than stateless services.
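
By contrast, a stateful service is typically run as a StatefulSet with per-replica persistent storage. The sketch below, again using the official kubernetes Python client, is illustrative only: the database image, storage size, and names are assumptions, and the exact model class for the claim's resources field differs between client versions.

    # Sketch: a stateful service as a Kubernetes StatefulSet with per-replica storage.
    # Assumes the official `kubernetes` Python client; names, image, and sizes are placeholders.
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    statefulset = client.V1StatefulSet(
        metadata=client.V1ObjectMeta(name="cart-db"),
        spec=client.V1StatefulSetSpec(
            service_name="cart-db",  # headless Service gives each replica a stable network identity
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "cart-db"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "cart-db"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="db",
                            image="postgres:15",  # any replicated database illustrates the point
                            volume_mounts=[
                                client.V1VolumeMount(
                                    name="data", mount_path="/var/lib/postgresql/data"
                                )
                            ],
                        )
                    ]
                ),
            ),
            # Each replica gets its own PersistentVolumeClaim, so its data survives pod
            # restarts; keeping replicas consistent is the database's own job.
            volume_claim_templates=[
                client.V1PersistentVolumeClaim(
                    metadata=client.V1ObjectMeta(name="data"),
                    spec=client.V1PersistentVolumeClaimSpec(
                        access_modes=["ReadWriteOnce"],
                        # Newer client versions name this model V1VolumeResourceRequirements.
                        resources=client.V1ResourceRequirements(
                            requests={"storage": "10Gi"}
                        ),
                    ),
                )
            ],
        ),
    )

    apps.create_namespaced_stateful_set(namespace="default", body=statefulset)

Unlike the Deployment above, each replica here has a stable identity and its own volume, which is why scaling, upgrades, and failure recovery require the more careful coordination this book focuses on.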