For standard programs: no need to install, no need to maintain

We often rely on a huge number of ready-made solutions. When choosing one, we face a dilemma: on the one hand, it is more universal and better proven than anything we could afford to build ourselves; on the other hand, it can be complex enough that just installing and configuring it properly – installing all the dependencies, resolving conflicts, preparing it for first use – becomes a task in itself. Today installation and configuration have become much simpler and more standardized, and low-level problems are largely gone. But before we continue, let's digress and look at how the path from obtaining a program to actually using it has changed over time:

* In the days when all programs were written in assembler, they were distributed by mail; users installed and tested them themselves, because the companies did not provide testing. If problems arose, the user reported them to the developer company and, after they were fixed, received the corrected version on a disk by mail. The process was very long, and the user did the testing himself.

* In the era of distribution on disks, companies were already writing their software in higher-level languages and testing it on different OS versions. Hereinafter we will consider free software. The program already came with a Makefile, which compiled and installed the program itself.

* Since the advent of the Internet, software has been massively installed through package managers: on release, it is downloaded and installed from a remote OS repository. The package manager tries to track and maintain compatibility between programs. Further study and use of the program – how to start it, how to configure it, how to tell that it is working – still falls on the user or the system administrator.

* With the advent of Docker Hub and the web, applications are downloaded and run as containers. A container usually does not need to be configured for initial operation.

For containers and images as a whole, the server lets you adjust how much space they may occupy and how much must remain free. By default, 10 GB is allocated for all containers and images, and at least dm.min_free_space = 5% should remain free; it is better to put these settings into the config, which may need to be created at /etc/docker/daemon.json :

{
  "storage-opts": [
    "dm.basesize=50G",
    "dm.min_free_space=5%"
  ]
}
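After editing daemon.json, the daemon has to be restarted for the new options to take effect, and the current consumption can be checked. A minimal sketch, assuming a systemd-based host:

# apply the new storage options
sudo systemctl restart docker
# show how much space images, containers and volumes occupy
docker system df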

You can limit the resources consumed by the container in its settings:

* -m 256m – maximum amount of RAM the container may consume (here 256 MB);

* -c 512 – CPU usage priority weight (an alias for --cpu-shares; 1024 by default);

* --cpuset-cpus="0,1" – numbers of the CPU cores the container is allowed to use (a combined example follows this list).
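Putting these flags together, a launch might look like the following sketch (nginx is just an arbitrary example image and limited-app a hypothetical container name):

# run an image with memory, CPU-weight and CPU-core limits
docker run -d --name limited-app \
  -m 256m \
  -c 512 \
  --cpuset-cpus="0,1" \
  nginx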

Product transfer and distribution

To transfer a project, for example to a customer, and to distribute it between developers and servers, you can use installation scripts, archives, images, and containers. Each of these ways of distributing a project has its own characteristics, advantages and disadvantages. Let's go through them and compare.

lines, but the main thing is that it has a follow mode, enabled by the -f switch, which dynamically prints new lines as they arrive and keeps the output up to date; docker logs supports the same switch, for example docker logs -f name_container .

When there are too many applications to monitor each one's work manually, it makes sense to centralize application logs. For centralization, numerous programs can be used that collect logs from different services and ship them to a central store, for example Fluentd. It is convenient to use ElasticSearch to store the logs, simply by writing them into this search engine. It is highly desirable that the logs be in a structured format – JSON. This will allow you to sort them, select the ones you need, spot trends using the built-in aggregate functions, and perform analysis and forecasting, rather than just searching by text. For analysis, the Kibana web interface included in the Elastic stack is used.
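As an illustration of shipping container logs to such a collector, Docker has a built-in Fluentd logging driver. A minimal sketch, assuming a Fluentd instance is already listening on localhost:24224 and my-app is your image:

# send this container's stdout/stderr to Fluentd instead of the local json-file log
docker run -d \
  --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  --log-opt tag="docker.{{.Name}}" \
  my-app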

Logging is important not only for long-running applications. For test containers, for example, it is convenient to get the output of the tests that were run. This can be done by specifying in the CMD instruction of the Dockerfile the command that runs the tests, for example npm test.
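A minimal sketch of how such a test container could be built and run (the image name app-tests is arbitrary; the Dockerfile is assumed to end with a CMD that runs npm test):

# build the image whose CMD runs the tests
docker build -t app-tests .
# run the tests; output is printed to the terminal, the exit code reflects pass/fail
docker run --rm app-tests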

Image storage:

* public and private Docker Hub

* for private and secret projects, you can deploy your own image storage; the official image for it is called registry (see the sketch after this list)
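A minimal sketch of running your own registry from that image and pushing into it (localhost:5000 and my-app are assumptions for the example):

# start a local registry on port 5000
docker run -d -p 5000:5000 --name registry registry:2
# tag an existing image for that registry and push it
docker tag my-app localhost:5000/my-app
docker push localhost:5000/my-app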

Docker for building apps and one-off jobs

Unlike virtual machines, whose launch involves significant human and computational costs, Docker is often used to perform one-off actions: when software needs to be run once and it is desirable not to spend effort on installing and then removing it. To do this, a container is started that is mounted to the folder with our application, performs the required actions on it, and is deleted after they are completed. An example is a JavaScript project that needs to be built and have its tests run. The project itself does not ship NodeJS, but contains only the bundler configs, for example for WEBPack, and the written tests. To do this, we start the build container in interactive mode, in which you can control the build process if necessary; after the build is completed the container stops and removes itself. For example, you can run something like this at the root of the application: docker run -it --rm -v $(pwd):/app node-build . Tests can be run in the same way. As a result, the application is built and tested on a test server, while the software that is not required for its operation on the production server is neither installed there nor consumes resources, and the result can be transferred to the production server, for example, in a container. In order not to write documentation on how to start the build and the tests, you can add two corresponding configs, docker-compose-build.yml and docker-compose-test.yml, and invoke them with docker-compose -f ./docker-compose-build.yml up.
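The two invocations then look like this (a sketch; the --abort-on-container-exit flag is an addition that makes the command return as soon as the one-off job finishes):

# one-off build
docker-compose -f ./docker-compose-build.yml up --abort-on-container-exit
# one-off test run
docker-compose -f ./docker-compose-test.yml up --abort-on-container-exit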

Management and access

We manage containers using the docker command. Now suppose there is a need to manage them remotely. Using VNC, SSH, or something similar just to run docker commands will probably be too cumbersome once the task gets more complicated. Rightly so: first you need to understand what Docker actually is, because the docker command and Docker the program are not the same thing; more precisely, the docker command is a console client for managing the client-server application Docker Engine. The command interacts with the Docker daemon on the server through the Docker REST API, which is also intended for remote interaction with the server. But in this case you need to take care of authorization and SSL encryption of traffic. This is ensured by creating keys and certificates, but in general, if the task is centralized management, separation of rights and security, it is better to look towards products that provide this out of the box and use Docker only as a way to launch containers, not as a management system.
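As a sketch of such remote access over the REST API (the host name remote-host, port 2376 and the certificate path are assumptions for the example; the environment variables themselves are standard for the docker client):

export DOCKER_HOST=tcp://remote-host:2376   # where the remote Docker daemon listens
export DOCKER_TLS_VERIFY=1                  # require TLS and verify the server certificate
export DOCKER_CERT_PATH=~/.docker/remote-certs
docker ps                                   # now lists containers on the remote host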

