IT Cloud. Eugeny Shtoltc

      essh@kubernetes-master:~/node-cluster$ sudo docker images | grep nodejs_cluster

      nodejs_cluster latest a957fa7a1efa 26 minutes ago 906MB

      gcr.io/node-cluster-243923/nodejs_cluster latest a957fa7a1efa 26 minutes ago 906MB

      essh@kubernetes-master:~/node-cluster$ gcloud auth configure-docker

      gcloud credential helpers already registered correctly.

      essh@kubernetes-master:~/node-cluster$ docker push gcr.io/$PROJECT_ID/$IMAGE_ID:latest

      The push refers to repository [gcr.io/node-cluster-243923/nodejs_cluster]

      194f3d074f36: Pushed

      b91e71cc9778: Pushed

      640fdb25c9d7: Layer already exists

      b0b300677afe: Layer already exists

      5667af297e60: Layer already exists

      84d0c4b192e8: Layer already exists

      a637c551a0da: Layer already exists

      2c8d31157b81: Layer already exists

      7b76d801397d: Layer already exists

      f32868cde90b: Layer already exists

      0db06dff9d9a: Layer already exists

      latest: digest: sha256:912938003a93c53b7c8f806cded3f9bffae7b5553b9350c75791ff7acd1dad0b size: 2629

      essh@kubernetes-master:~/node-cluster$ gcloud container images list

      NAME

      gcr.io/node-cluster-243923/nodejs_cluster

      Only listing images in gcr.io/node-cluster-243923. Use --repository to list images in other repositories.

      Now the image can be seen in the GCP admin panel: Container Registry -> Images. Let's point our container at this image. For production, the deployed image should be pinned to a specific version so that it is not updated automatically when the system re-creates PODs, for example when a POD is moved to another node because the machine hosting our node is taken down for maintenance. For development it is more convenient to use the latest tag, so that the service picks up the image whenever it is rebuilt. To roll out a new image, the service has to be re-created, that is, deleted and created again; otherwise Terraform merely updates the parameters and does not re-create the container with the new image. Even if we update the image and mark the service as modified with ./terraform taint ${NAME_SERVICE}, the service is simply updated in place, as ./terraform plan shows. So for now the update has to be done with ./terraform destroy -target=${NAME_SERVICE} followed by ./terraform apply; the service names can be found in ./terraform state list:

      essh@kubernetes-master:~/node-cluster$ ./terraform state list

      data.google_client_config.default

      module.kubernetes.google_container_cluster.node-ks

      module.kubernetes.google_container_node_pool.node-ks-pool

      module.Nginx.kubernetes_deployment.nodejs

      module.Nginx.kubernetes_service.nodejs

      essh@kubernetes-master:~/node-cluster$ ./terraform destroy -target=module.Nginx.kubernetes_deployment.nodejs

      essh@kubernetes-master:~/node-cluster$ ./terraform apply

      Now let's update the container block to use our image:

      container {

      image = "gcr.io/node-cluster-243923/nodejs_cluster:latest"

      name = "node-js"

      }
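
      For production, the same block would pin the image to an explicit version instead of the mutable latest tag, so that a re-created POD can never silently pull a different image. A minimal sketch, assuming a hypothetical 1.0.0 tag has been pushed alongside latest:

      container {
        # hypothetical versioned tag; pinning by digest (nodejs_cluster@sha256:...) is stricter still
        image = "gcr.io/node-cluster-243923/nodejs_cluster:1.0.0"
        name  = "node-js"
      }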

      Let's check the result of balancing across different nodes (there is no line break at the end of the response, so the next shell prompt is printed right after the output):

      essh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

      Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lqg48essh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

      Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

      Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

      Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

      Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

      Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

      Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lqg48essh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

      Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

      Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

      Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

      Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lqg48essh@kubernetes-master:~/node-cluster$
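
      The two alternating hostnames are the names of the PODs answering behind the LoadBalancer service, so requests are indeed spread across the replicas. For reference, a minimal sketch of what the deployment and service in module.Nginx might look like; the replica count, labels and ports are assumptions, since the module source is not shown in this excerpt:

      resource "kubernetes_deployment" "nodejs" {
        metadata {
          name = "terraform-nodejs"
        }
        spec {
          replicas = 2                 # assumed; at least two PODs answer above
          selector {
            match_labels = {
              app = "node-js"
            }
          }
          template {
            metadata {
              labels = {
                app = "node-js"
              }
            }
            spec {
              container {
                image = "gcr.io/node-cluster-243923/nodejs_cluster:latest"
                name  = "node-js"
              }
            }
          }
        }
      }

      resource "kubernetes_service" "nodejs" {
        metadata {
          name = "terraform-nodejs"
        }
        spec {
          selector = {
            app = "node-js"
          }
          port {
            port        = 80           # external port queried by curl
            target_port = 80           # assumed application port
          }
          type = "LoadBalancer"        # provides the external IP used above
        }
      }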

      Let's automate the image builds. For this we will use the Google Cloud Build service (free for up to 5 users and up to 50GB of traffic) to build a new image whenever a new version (tag) is pushed to a Cloud Source Repositories repository (free on the Google Cloud Platform Free Tier); a Terraform sketch of such a trigger is shown after the listing below. Google Cloud Platform -> Menu -> Tools -> Cloud Build -> Triggers -> Enable Cloud Build API -> Get Started -> Create a repository; the repository will then be available under Google Cloud Platform -> Menu -> Tools -> Source Repositories (Cloud Source Repositories):

      essh@kubernetes-master:~/node-cluster$ cd app/

      essh@kubernetes-master:~/node-cluster/app$ ls

      server.js

      essh@kubernetes-master:~/node-cluster/app$ mv ./server.js ../

      essh@kubernetes-master:~/node-cluster/app$ gcloud source repos clone nodejs --project=node-cluster-243923

      Cloning into '/home/essh/node-cluster/app/nodejs'...

      warning: You appear to have cloned an empty repository.

      Project [node-cluster-243923] repository [nodejs] was cloned to [/home/essh/node-cluster/app/nodejs].

      essh@kubernetes-master:~/node-cluster/app$ ls -a

      . .. nodejs

      essh@kubernetes-master:~/node-cluster/app$ ls nodejs/

      essh@kubernetes-master:~/node-cluster/app$ ls -a nodejs/

      . .. .git

      essh@kubernetes-master:~/node-cluster/app$ cd nodejs/

      essh@kubernetes-master:~/node-cluster/app/nodejs$ mv ../../server.js .

      essh@kubernetes-master:~/node-cluster/app/nodejs$ git add server.js

      essh@kubernetes-master:~/node-cluster/app/nodejs$ git commit -m 'test server'

      [master (root-commit) 46dd957] test server

      1 file changed, 7 insertions(+)

      create mode 100644 server.js

      essh@kubernetes-master:~/node-cluster/app/nodejs$ git push -u origin master

      Counting objects: 3, done.

      Delta compression using up to 8 threads.

      Compressing objects: 100% (2/2), done.

      Writing objects: 100% (3/3), 408 bytes | 408.00 KiB/s, done.

      Total 3 (delta
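
      The trigger enabled in the console above could also be declared in Terraform; a rough sketch using the google_cloudbuild_trigger resource (the tag filter and build steps are assumptions, not the book's actual configuration):

      resource "google_cloudbuild_trigger" "nodejs" {
        # fire on every new tag in the Cloud Source Repositories repo created above
        trigger_template {
          project_id = "node-cluster-243923"
          repo_name  = "nodejs"
          tag_name   = ".*"
        }

        build {
          # build the image and tag it with the git tag that triggered the build
          step {
            name = "gcr.io/cloud-builders/docker"
            args = ["build", "-t", "gcr.io/node-cluster-243923/nodejs_cluster:$TAG_NAME", "."]
          }
          # push the resulting image to Container Registry
          images = ["gcr.io/node-cluster-243923/nodejs_cluster:$TAG_NAME"]
        }
      }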

