Let's move on to implementing the NodeJS server:

      essh@kubernetes-master:~/node-cluster$ sudo ./terraform destroy

      essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply

      essh@kubernetes-master:~/node-cluster$ sudo docker run -it --rm node:12 which node
      /usr/local/bin/node

      sudo docker run -it --rm -p 8222:80 node:12 /bin/bash -c 'cd /usr/src/ && git clone https://github.com/fhinkel/nodejs-hello-world.git && /usr/local/bin/node /usr/src/nodejs-hello-world/index.js'

      firefox http://localhost:8222

      Let's replace the container block in our configuration with:

      container {
        image   = "node:12"
        name    = "node-js"
        command = ["/bin/bash"]
        args = [
          "-c",
          "cd /usr/src/ && git clone https://github.com/fhinkel/nodejs-hello-world.git && /usr/local/bin/node /usr/src/nodejs-hello-world/index.js"
        ]
      }
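
      For orientation, here is a minimal sketch of where such a container block sits inside the deployment resource of the nodejs module; the metadata name, labels and replica count shown here are assumptions for illustration, not the book's full listing:

      resource "kubernetes_deployment" "nodejs" {
        metadata {
          name = "nodejs"
        }
        spec {
          replicas = 1
          selector {
            match_labels = {
              app = "nodejs"
            }
          }
          template {
            metadata {
              labels = {
                app = "nodejs"
              }
            }
            spec {
              # the container block shown above goes here
              container {
                image   = "node:12"
                name    = "node-js"
                command = ["/bin/bash"]
                args = [
                  "-c",
                  "cd /usr/src/ && git clone https://github.com/fhinkel/nodejs-hello-world.git && /usr/local/bin/node /usr/src/nodejs-hello-world/index.js"
                ]
              }
            }
          }
        }
      }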

      If you comment out a module while its resources are still recorded in the Terraform state, you need to remove the stale entries from the state:

      essh@kubernetes-master:~/node-cluster$ ./terraform apply
      Error: Provider configuration not present

      essh@kubernetes-master:~/node-cluster$ ./terraform state list
      data.google_client_config.default
      module.Kubernetes.google_container_cluster.node-ks
      module.Kubernetes.google_container_node_pool.node-ks-pool
      module.nodejs.kubernetes_deployment.nodejs
      module.nodejs.kubernetes_service.nodejs

      essh@kubernetes-master:~/node-cluster$ ./terraform state rm module.nodejs.kubernetes_deployment.nodejs
      Removed module.nodejs.kubernetes_deployment.nodejs
      Successfully removed 1 resource instance(s).

      essh@kubernetes-master:~/node-cluster$ ./terraform state rm module.nodejs.kubernetes_service.nodejs
      Removed module.nodejs.kubernetes_service.nodejs
      Successfully removed 1 resource instance(s).

      essh@kubernetes-master:~/node-cluster$ ./terraform apply
      module.Kubernetes.google_container_cluster.node-ks: Refreshing state... [id=node-ks]
      module.Kubernetes.google_container_node_pool.node-ks-pool: Refreshing state... [id=europe-west2-a/node-ks/node-ks-pool]
      Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

      Terraform Cluster Reliability and Automation

      For a general overview of automation, see https://codelabs.developers.google.com/codelabs/cloud-builder-gke-continuous-deploy/index.html#0; here we will look at it in more detail. If we now run ./terraform destroy and then try to recreate the entire infrastructure from scratch, we will get errors. They occur because the order in which services are created is not specified, and by default Terraform sends requests to the API in 10 parallel threads (this can be changed with the -parallelism switch during apply or destroy). As a result, Terraform tries to create the Kubernetes services (Deployment and Service) on a node pool that does not yet exist, and the same happens with a Service that proxies a Deployment that has not yet been created. Telling Terraform to call the API in a single thread, ./terraform apply -parallelism=1, reduces the chance of hitting provider-side limits on the frequency of API calls, but it does not solve the problem of the missing creation order. We will not comment out dependent blocks and gradually uncomment them, re-running ./terraform apply each time, nor will we apply the system piece by piece by targeting specific blocks with ./terraform apply -target=module.nodejs.kubernetes_deployment.nodejs. Instead, we will express the dependencies in the code through variable initialization: the first variable is already defined as the external var.endpoint, and the second we will create locally:

      locals {
        app = kubernetes_deployment.nodejs.metadata.0.labels.app
      }

      Now we can add the dependencies to the code: depends_on = [var.endpoint] and depends_on = [kubernetes_deployment.nodejs].
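
      A minimal sketch of how the second dependency might look on the service side of the nodejs module; the service name, port numbers and selector wiring are assumptions for illustration:

      resource "kubernetes_service" "nodejs" {
        metadata {
          name = "nodejs"
        }
        spec {
          # using local.app in the selector already creates an implicit
          # dependency on kubernetes_deployment.nodejs
          selector = {
            app = local.app
          }
          port {
            port        = 80
            target_port = 80
          }
          type = "LoadBalancer"
        }
        # the explicit dependency described above
        depends_on = [kubernetes_deployment.nodejs]
      }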

      A service unavailability error may also appear: Error: Get https://35.197.228.3/api/v1...: dial tcp 35.197.228.3:443: connect: connection refused. This means the connection timeout has been exceeded, which is 6 minutes (360 seconds) by default; in this case you can simply try again.

      Now let's address the reliability of the container, whose main process we currently start through a command shell. The first thing to do is to separate building the application from launching the container. To do that, the whole process of preparing the service is moved into the process of building an image, which can be tested and from which a service container can be created. So let's create the image:

      essh@kubernetes-master:~/node-cluster$ cat app/server.js
      const http = require('http');
      const server = http.createServer(function (request, response) {
        response.writeHead(200, {"Content-Type": "text/plain"});
        response.end(`Nodejs_cluster is working! My host is ${process.env.HOSTNAME}`);
      });
      server.listen(80);

      essh@kubernetes-master:~/node-cluster$ cat Dockerfile
      FROM node:12
      WORKDIR /usr/src/
      ADD ./app /usr/src/
      RUN npm install
      EXPOSE 3000
      ENTRYPOINT ["node", "server.js"]

      essh@kubernetes-master:~/node-cluster$ sudo docker image build -t nodejs_cluster .
      Sending build context to Docker daemon 257.4MB
      Step 1/6 : FROM node:12
      ---> b074182f4154
      Step 2/6 : WORKDIR /usr/src/
      ---> Using cache
      ---> 06666b54afba
      Step 3/6 : ADD ./app /usr/src/
      ---> Using cache
      ---> 13fa01953b4a
      Step 4/6 : RUN npm install
      ---> Using cache
      ---> dd074632659c
      Step 5/6 : EXPOSE 3000
      ---> Using cache
      ---> ba3b7745b8e3
      Step 6/6 : ENTRYPOINT ["node", "server.js"]
      ---> Using cache
      ---> a957fa7a1efa
      Successfully built a957fa7a1efa
      Successfully tagged nodejs_cluster:latest

      essh@kubernetes-master:~/node-cluster$ sudo docker images | grep nodejs_cluster
      nodejs_cluster latest a957fa7a1efa 26 minutes ago 906MB

      Now let's push our image to the GCP registry instead of Docker Hub, since that immediately gives us a private repository to which our services automatically have access:

      essh@kubernetes-master:~/node-cluster$ IMAGE_ID="nodejs_cluster"
      essh@kubernetes-master:~/node-cluster$ sudo docker tag $IMAGE_ID:latest gcr.io/$PROJECT_ID/$IMAGE_ID:latest
      essh
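
      Once the image is in the registry, the container block in the Terraform module can reference it instead of the public node:12 image. A minimal sketch, assuming a var.project_id variable that holds the GCP project id used above:

      container {
        # pull the application image from the project's private GCR repository;
        # no command/args are needed because the image's ENTRYPOINT already starts node server.js
        image = "gcr.io/${var.project_id}/nodejs_cluster:latest"
        name  = "node-js"
      }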

