
Book, English, 288 pages, format (W × H): 188 mm × 236 mm, weight: 1800 g

Comer

The Cloud Computing Book

The Future of Computing Explained
1st edition, 2021
ISBN: 978-0-367-70680-7
Publisher: Taylor & Francis Ltd



This latest textbook from bestselling author Douglas E. Comer is a class-tested book that provides a comprehensive introduction to cloud computing. Focusing on concepts and principles, rather than commercial offerings by cloud providers and vendors, The Cloud Computing Book: The Future of Computing Explained gives readers a complete picture of the advantages and growth of cloud computing, cloud infrastructure, virtualization, automation and orchestration, and cloud-native software design.

The book explains real and virtual data center facilities, including computation (e.g., servers, hypervisors, virtual machines, and containers), networks (e.g., leaf-spine architecture, VLANs, and VXLAN), and storage mechanisms (e.g., SAN, NAS, and object storage). Chapters on automation and orchestration cover the conceptual organization of systems that automate software deployment and scaling. Chapters on cloud-native software cover parallelism, microservices, MapReduce, controller-based designs, and serverless computing. Although it focuses on concepts and principles, the book uses popular technologies in examples, including Docker containers and Kubernetes. Final chapters explain security in a cloud environment and the use of models to help control the complexity involved in designing software for the cloud.

The text is suitable for a one-semester course for software engineers who want to understand cloud computing, and for IT managers moving an organization's computing to the cloud.


Target audience


Undergraduate Core


Authors/Editors


Further information & material


Preface

PART I The Era Of Cloud Computing

The Motivations For Cloud

1.1 Cloud Computing Everywhere

1.2 A Facility For Flexible Computing

1.3 The Start Of Cloud: The Power Wall And Multiple Cores

1.4 From Multiple Cores To Multiple Machines

1.5 From Clusters To Web Sites And Load Balancing

1.6 Racks Of Server Computers

1.7 The Economic Motivation For A Centralized Data Center

1.8 Origin Of The Term “In The Cloud”

1.9 Centralization Once Again

Elastic Computing And Its Advantages

2.1 Introduction

2.2 Multi-Tenant Clouds

2.3 The Concept Of Elastic Computing

2.4 Using Virtualized Servers For Rapid Change

2.5 How Virtualized Servers Aid Providers

2.6 How Virtualized Servers Help A Customer

2.7 Business Models For Cloud Providers

2.8 Infrastructure as a Service (IaaS)

2.9 Platform as a Service (PaaS)

2.10 Software as a Service (SaaS)

2.11 A Special Case: Desktop as a Service (DaaS)

2.12 Summary

Types Of Clouds And Cloud Providers

3.1 Introduction

3.2 Private And Public Clouds

3.3 Private Cloud

3.4 Public Cloud

3.5 The Advantages Of Public Cloud

3.6 Provider Lock-In

3.7 The Advantages Of Private Cloud

3.8 Hybrid Cloud

3.9 Multi-Cloud

3.10 Hyperscalers

3.11 Summary

PART II Cloud Infrastructure And Virtualization

Data Center Infrastructure And Equipment

4.1 Introduction

4.2 Racks, Aisles, And Pods

4.3 Pod Size

4.4 Power And Cooling For A Pod

4.5 Raised Floor Pathways And Air Cooling

4.6 Thermal Containment And Hot/Cold Aisles

4.7 Exhaust Ducts (Chimneys)

4.8 Lights-Out Data Centers

4.9 A Possible Future Of Liquid Cooling

4.10 Network Equipment And Multi-Port Server Interfaces

4.11 Smart Network Interfaces And Offload

4.12 North-South And East-West Network Traffic

4.13 Network Hierarchies, Capacity, And Fat Tree Designs

4.14 High Capacity And Link Aggregation

4.15 A Leaf-Spine Network Design For East-West Traffic

4.16 Scaling A Leaf-Spine Architecture With A Super Spine

4.17 External Internet Connections

4.18 Storage In A Data Center

4.19 Unified Data Center Networks

4.20 Summary

Virtual Machines

5.1 Introduction

5.2 Approaches To Virtualization

5.3 Properties Of Full Virtualization

5.4 Conceptual Organization Of VM Systems

5.5 Efficient Execution And Processor Privilege Levels

5.6 Extending Privilege To A Hypervisor

5.7 Levels Of Trust

5.8 Levels Of Trust And I/O Devices

5.9 Virtual I/O Devices

5.10 Virtual Device Details

5.11 An Example Virtual Device

5.12 A VM As A Digital Object

5.13 VM Migration

5.14 Live Migration Using Three Phases

5.15 Running Virtual Machines In An Application

5.16 Facilities That Make A Hosted Hypervisor Possible

5.17 How A User Benefits From A Hosted Hypervisor

5.18 Summary

Containers

6.1 Introduction

6.2 The Advantages And Disadvantages Of VMs

6.3 Traditional Apps And Elasticity On Demand

6.4 Isolation Facilities In An Operating System

6.5 Linux Namespaces Used For Isolation

6.6 The Container Approach For Isolated Apps

6.7 Docker Containers

6.8 Docker Terminology And Development Tools

6.9 Docker Software Components

6.10 Base Operating System And Files

6.11 Items In A Dockerfile

6.12 An Example Dockerfile

6.13 Summary

Virtual Networks

7.1 Introduction

7.2 Conflicting Goals For A Data Center Network

7.3 Virtual Networks, Overlays, And Underlays

7.4 Virtual Local Area Networks (VLANs)

7.5 Scaling VLANs To A Data Center With VXLAN

7.6 A Virtual Network Switch Within A Server

7.7 Network Address Translation (NAT)

7.8 Managing Virtualization And Mobility

7.9 Automated Network Configuration And Operation

7.10 Software Defined Networking

7.11 The OpenFlow Protocol

7.12 Programmable Networks

7.13 Summary

Virtual Storage

8.1 Introduction

8.2 Persistent Storage: Disks And Files

8.3 The Disk Interface Abstraction

8.4 The File Interface Abstraction

8.5 Local And Remote Storage

8.6 Two Types Of Remote Storage Systems

8.7 Network Attached Storage (NAS) Technology

8.8 Storage Area Network (SAN) Technology

8.9 Mapping Virtual Disks To Physical Disks

8.10 Hyper-Converged Infrastructure

8.11 A Comparison Of NAS And SAN Technology

8.12 Object Storage

8.13 Summary

PART III Automation And Orchestration

Automation

9.1 Introduction

9.2 Groups That Use Automation

9.3 The Need For Automation In A Data Center

9.4 An Example Deployment

9.5 What Can Be Automated?

9.6 Levels Of Automation

9.7 AIops: Using Machine Learning And Artificial Intelligence

9.8 A Plethora Of Automation Tools

9.9 Automation Of Manual Data Center Practices

9.10 Zero Touch Provisioning And Infrastructure As Code

9.11 Declarative, Imperative, And Intent-Based Specifications

9.12 The Evolution Of Automation Tools

9.13 Summary

Orchestration: Automated Replication And Parallelism

10.1 Introduction

10.2 The Legacy Of Automating Manual Procedures

10.3 Orchestration: Automation With A Larger Scope

10.4 Kubernetes: An Example Container Orchestration System

10.5 Limits On Kubernetes Scope

10.6 The Kubernetes Cluster Model

10.7 Kubernetes Pods

10.8 Pod Creation, Templates, And Binding Times

10.9 Init Containers

10.10 Kubernetes Terminology: Nodes And Control Plane

10.11 Control Plane Software Components

10.12 Communication Among Control Plane Components

10.13 Worker Node Software Components

10.14 Kubernetes Features

10.15 Summary

PART IV Cloud Programming Paradigms

The MapReduce Paradigm

11.1 Introduction

11.2 Software In A Cloud Environment

11.3 Cloud-Native Vs. Conventional Software

11.4 Using Data Center Servers For Parallel Processing

11.5 Tradeoffs And Limitations Of The Parallel Approach

11.6 The MapReduce Programming Paradigm

11.7 Mathematical Description Of MapReduce

11.8 Splitting Input

11.9 Parallelism And Data Size

11.10 Data Access And Data Transmission

11.11 Apache Hadoop

11.12 The Two Major Parts Of Hadoop

11.13 Hadoop Hardware Cluster Model

11.14 HDFS Components: DataNodes And A NameNode

11.15 Block Replication And Fault Tolerance

11.16 HDFS And MapReduce

11.17 Using Hadoop With Other File Systems

11.18 Using Hadoop For MapReduce Computations

11.19 Hadoop’s Support For Programming Languages

11.20 Summary

Microservices

12.1 Introduction

12.2 Traditional Monolithic Applications

12.3 Monolithic Applications In A Data Center

12.4 The Microservices Approach

12.5 The Advantages Of Microservices

12.6 The Potential Disadvantages Of Microservices

12.7 Microservices Granularity

12.8 Communication Protocols Used For Microservices

12.9 Communication Among Microservices

12.10 Using A Service Mesh Proxy

12.11 The Potential For Deadlock

12.12 Microservices Technologies

12.13 Summary

Controller-Based Management Software

13.1 Introduction

13.2 Traditional Distributed Application Management

13.3 Periodic Monitoring

13.4 Managing Cloud-Native Applications

13.5 Control Loop Concept

13.6 Control Loop Delay, Hysteresis, And Instability

13.7 The Kubernetes Controller Paradigm And Control Loop

13.8 An Event-Driven Implementation Of A Control Loop

13.9 Components Of A Kubernetes Controller

13.10 Custom Resources And Custom Controllers

13.11 Kubernetes Custom Resource Definition (CRD)

13.12 Service Mesh Management Tools

13.13 Reactive Or Dynamic Planning

13.14 A Goal: The Operator Pattern

13.15 Summary

Serverless Computing And Event Processing

14.1 Introduction

14.2 Traditional Client-Server Architecture

14.3 Scaling A Traditional Server To Handle Multiple Clients

14.4 Scaling A Server In A Cloud Environment

14.5 The Economics Of Servers In The Cloud

14.6 The Serverless Computing Approach

14.7 Stateless Servers And Containers

14.8 The Architecture Of A Serverless Infrastructure

14.9 An Example Of Serverless Processing

14.10 Potential Disadvantages Of Serverless Computing

14.11 Summary

DevOps

15.1 Introduction

15.2 Software Creation And Deployment

15.3 The Realistic Software Development Cycle

15.4 Large Software Projects And Teams

15.5 Disadvantages Of Using Multiple Teams

15.6 The DevOps Approach

15.7 Continuous Integration (CI): A Short Change Cycle

15.8 Continuous Delivery (CD): Deploying Versions Rapidly

15.9 Cautious Deployment: Sandbox, Canary, And Blue/Green

15.10 Difficult Aspects Of The DevOps Approach

15.11 Summary

PART V Other Aspects Of Cloud

Edge Computing And IIoT

16.1 Introduction

16.2 The Latency Disadvantage Of Cloud

16.3 Situations Where Latency Matters

16.4 Industries That Need Low Latency

16.5 Moving Computing To The Edge

16.6 Extending Edge Computing To A Fog Hierarchy

16.7 Caching At Multiple Levels Of A Hierarchy

16.8 An Automotive Example

16.9 Edge Computing And IIoT

16.10 Communication For IIoT

16.11 Decentralization Once Again

16.12 Summary

Cloud Security And Privacy

17.1 Introduction

17.2 Cloud-Specific Security Problems

17.3 Security In A Traditional Infrastructure

17.4 Why Traditional Methods Do Not Suffice For The Cloud

17.5 The Zero Trust Security Model

17.6 Identity Management

17.7 Privileged Access Management (PAM)

17.8 AI Technologies And Their Effect On Security

17.9 Protecting Remote Access

17.10 Privacy In A Cloud Environment

17.11 Back Doors, Side Channels, And Other Concerns

17.12 Cloud Providers As Partners For Security And Privacy

17.13 Summary

Controlling The Complexity Of Cloud-Native Systems

18.1 Introduction

18.2 Sources Of Complexity In Cloud Systems

18.3 Inherent Complexity In Large Distributed Systems

18.4 Designing A Flawless Distributed System

18.5 System Modeling

18.6 Mathematical Models

18.7 An Example Graph Model To Help Avoid Deadlock

18.8 A Graph Model For A Startup Sequence

18.9 Modeling Using Mathematics

18.10 An Example TLA+ Specification

18.11 System State And State Changes

18.12 The Form Of A TLA+ Specification

18.13 Symbols In A TLA+ Specification

18.14 State Transitions For The Example

18.15 Conclusions About Temporal Logic Models

18.16 Summary

Index


Dr. Douglas Comer is a Distinguished Professor at Purdue University, an industry consultant, and an internationally acclaimed author. He served as the inaugural VP of Research at Cisco Systems and maintains ties with industry. His books are used in industry and academia around the world. Comer is a Fellow of the ACM, a member of the Internet Hall of Fame, and the recipient of numerous teaching awards. His ability to make complex topics understandable gives his books broad appeal for a wide variety of audiences.


