An Analysis of OpenStack Memory Usage


Author: 张航东

Version: Kilo

This article is mainly for my personal study and notes. Reposting is welcome, but please be sure to credit the author and source. Thanks!


1. How many processes does OpenStack hold?

In this chapter, the OpenStack host specs are 4 CPUs and 16 GB of memory, just as an example.


1.1 How to check the process number
First, let's use nova-api as an example to explain how to check the number of processes.

nova-api has 1 main process and 12 child processes. We can see them with the following command:
[root@controller ~]# ps -eo pid,ppid,uid,vsz,rss,args | grep nova
[Screenshot: ps output listing the nova-api main process and its 12 children]
 
We can find out its startup function through the following steps (the same applies to other services).
[Screenshots: tracing the nova-api startup entry point]
 

In /nova/cmd/api.py
[Screenshot: the main() function in nova/cmd/api.py]
 
There are 3 parts in the above code.
1) A “for” loop over each “api” in CONF.enabled_apis. There are 3 APIs in this field in nova.conf, which means the nova-api processes cover 3 API types (ec2, osapi_compute, metadata).
[Screenshot: the enabled_apis field in nova.conf]
 
2) server = service.WSGIService() initializes the server and sets “workers”, which determines the number of processes for each API. In nova/service.py:
[Screenshot: WSGIService.__init__() in nova/service.py]
 
The “workers” value comes from the “%s_workers” (ec2, osapi_compute, metadata) fields in nova.conf. The value equals the number of available CPUs and is set by packstack when OpenStack is installed.
[Screenshot: the *_workers fields in nova.conf]
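Putting 1) and 2) together: on the 4-CPU example host, the relevant nova.conf entries would presumably look like the following hypothetical excerpt (option names are the Kilo ones; the values are what packstack is said to set):

```ini
[DEFAULT]
enabled_apis = ec2,osapi_compute,metadata
# packstack sets each *_workers option to the number of CPUs
ec2_workers = 4
osapi_compute_workers = 4
metadata_workers = 4
```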

3) launcher.launch_service(server, workers) starts the child processes for each API; the number of processes is determined by “workers”. In nova/openstack/common/service.py:
[Screenshot: ProcessLauncher.launch_service() in nova/openstack/common/service.py]
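The launch pattern described in steps 1)-3) can be sketched as a small, self-contained script. This is a simplification using raw os.fork, not nova's actual API; the helper name and structure are my own:

```python
import os
import time

def launch_service(name, workers):
    """Fork `workers` child processes for one API service, mimicking
    ProcessLauncher.launch_service() in nova/openstack/common/service.py.
    (A simplified sketch, not nova's real implementation.)"""
    pids = []
    for _ in range(workers):
        pid = os.fork()
        if pid == 0:
            # Child: nova would run the WSGI server loop here forever;
            # we just idle briefly and exit.
            time.sleep(0.2)
            os._exit(0)
        pids.append(pid)
    return pids

enabled_apis = ["ec2", "osapi_compute", "metadata"]
cpus = 4  # the 4-CPU example host from section 1

children = []
for api in enabled_apis:
    children.extend(launch_service(api, workers=cpus))

# 1 main process + 4 workers for each of the 3 APIs
print(1 + len(children))  # prints 13
for pid in children:
    os.waitpid(pid, 0)
```

Running `ps` while such a script sleeps would show the same parent/child layout as the nova-api listing above.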

 

According to the above analysis, we can summarize that:
nova-api process number = 1 (main process) + 4 (the number of CPUs) * ec2 API + 4 * osapi_compute API + 4 * metadata API = 13.

 

1.2     Controller
1.2.1     Nova
[Screenshots: nova process list and worker configuration on the controller]

The formula for the total number of processes is: 6 + 4 * CPUs

1.2.2     Neutron
[Screenshots: neutron process list and worker configuration on the controller]
 
The formula for the total number of processes is: 1 + 2 * CPUs


1.2.3     Cinder
[Screenshots: cinder process list and worker configuration on the controller]
 
The formula for the total number of processes is: 3 + 1 * CPUs + number of backends


1.2.4     Keystone
[Screenshots: keystone process list and worker configuration on the controller]

 

PS: there are no child processes when both public_workers and admin_workers are 1, because of the following constraint (in /keystone/server/eventlet.py):
[Screenshot: the worker-count check in keystone/server/eventlet.py]

The formula for the total number of processes is: 1 + 2 * CPUs


1.2.5     Glance
[Screenshots: glance process list and worker configuration on the controller]

The formula for the total number of processes is: 2 + 2 * CPUs


1.2.6     Ceilometer
[Screenshots: ceilometer process list and worker configuration on the controller]

The formula for the total number of processes is: 6 + 2 * CPUs (when CPUs > 1)


1.2.7     Httpd
[Screenshot: httpd process list on the controller]

Only 3 of the httpd processes belong to OpenStack, and we can change this number in “/etc/httpd/conf.d/15-horizon_vhost.conf”:
[Screenshot: the worker settings in 15-horizon_vhost.conf]
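Assuming the packstack-generated vhost serves Horizon through mod_wsgi (which it normally does), the directive controlling this count would look roughly like the following hypothetical excerpt; the values are illustrative, not taken from a real deployment:

```apache
# Hypothetical excerpt from /etc/httpd/conf.d/15-horizon_vhost.conf
WSGIDaemonProcess horizon processes=3 threads=10 user=apache group=apache
WSGIProcessGroup horizon
```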
 
1.2.8     Others
Each of the other backend services (mariadb, mongod, rabbitmq, redis, memcached) holds only 1 process.
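The per-service formulas from sections 1.2.1-1.2.6 can be collected into one small helper. This is a sketch: the function name and the `cinder_backends` parameter are mine, and ceilometer's formula assumes CPUs > 1 as noted in section 1.2.6:

```python
def controller_process_counts(cpus, cinder_backends=1):
    """Controller process counts per service, from sections 1.2.1-1.2.6.

    A sketch for illustration; `cinder_backends` defaults to a single
    storage backend. Assumes cpus > 1 for the ceilometer formula.
    """
    return {
        "nova": 6 + 4 * cpus,
        "neutron": 1 + 2 * cpus,
        "cinder": 3 + 1 * cpus + cinder_backends,
        "keystone": 1 + 2 * cpus,
        "glance": 2 + 2 * cpus,
        "ceilometer": 6 + 2 * cpus,
    }

# On the 4-CPU example host: nova alone accounts for 22 processes
print(controller_process_counts(4))
```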

1.3     Compute
[Screenshots: OpenStack process list on a compute node]

On every compute node, each OpenStack service holds only 1 process.

 

2.     How much memory does OpenStack consume?
According to the above analysis, we know that the number of processes of each OpenStack service is, by default, decided by the number of CPUs.

So, in this chapter, we will look at memory consumption, from individual processes up to the whole controller.


2.1     Pre-testing
First of all, I wrote a script to record the maximum memory consumption while installing OpenStack, ran some experiments, and drew some conclusions.
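The original script is not shown; a minimal sketch of such a peak-memory monitor might look like this, assuming it reads /proc/meminfo and computes "used" the way older versions of `free` do (the function names are mine):

```python
import time

def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a {field: kB} dict."""
    info = {}
    for line in text.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            info[key.strip()] = int(value.split()[0])  # values are in kB
    return info

def used_kb():
    """'used' memory as older `free` reports it:
    MemTotal - MemFree - Buffers - Cached."""
    with open("/proc/meminfo") as f:
        info = parse_meminfo(f.read())
    return info["MemTotal"] - info["MemFree"] - info["Buffers"] - info["Cached"]

def watch_peak(duration_s, interval_s=1.0):
    """Sample memory usage until duration_s elapses; return the peak (kB)."""
    peak = 0
    deadline = time.time() + duration_s
    while time.time() < deadline:
        peak = max(peak, used_kb())
        time.sleep(interval_s)
    return peak
```

Running `watch_peak` in the background during `packstack` would capture the installation's peak "used" value.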


The figure below shows the point of peak memory consumption during installation.
[Screenshot: peak memory usage during installation]

2.1.1     Case 1 - diff cpu, same memory
I installed 6 OpenStack environments with different CPU specs (4u/8u/16u/24u/36u/48u) and the same memory spec (16g), then collected the following data from the controller:
[Table: peak memory consumption vs. CPU count]

Assumption 1: Memory consumption on the controller increases conspicuously with the number of CPUs.

Also, after installation, when I kept running the “free” command on the controller, I found that memory consumption kept rising.
[Screenshot: `free` output rising after installation]

Assumption 2: The controller may need more memory for running than for installing.

2.1.2     Case 2 - same cpu, diff memory
I installed 4 OpenStack environments with the same CPU spec (16u) and different memory specs (8g/16g/32g/64g), then collected the following data from the controller:
[Table: peak memory consumption vs. memory size]
 
Assumption 3: Memory consumption on the controller also increases with memory size, but only slightly.


Assumption 4: If OpenStack is reinstalled after being destroyed, the installation needs less memory than the first time.

2.1.3     Case 3 - diff number of compute nodes

I installed 5 OpenStack environments with the same CPU spec (16u) and memory spec (16g), but different numbers of compute nodes (1/2/4/8/16), then collected the following data from the controller:
[Table: peak memory consumption vs. number of compute nodes]

 

Assumption 5: Memory consumption on the controller increases with the number of compute nodes.

2.1.4     Case 4 - diff capacities of compute nodes
[Tables: memory consumption on compute nodes of different capacities]
 
Assumption 6: Memory consumption on compute nodes differs with capacity, and is much less than total memory.

2.2     Consumption of each process
Is the memory consumption of each process affected by the number of CPUs or by memory size?

According to my verification, the answer is no.
 
2.2.1     Controller
The following tables show the average memory consumption of each type of process.
[Tables: average per-process memory consumption (RSS) for nova, neutron, cinder, keystone, glance, and ceilometer; OpenStack worker processes are marked with green shading]

The memory consumption of the other services (httpd, mariadb, mongod, rabbitmq, redis, memcached) varies little (<50 MB in total) across different vCPU counts.

2.2.2     Compute
[Table: per-process memory consumption on a compute node]
 


2.3     Consumption in theory (controller)
According to the above data, we can see that, across different CPU counts, the difference in memory consumption is mainly decided by the OpenStack processes (marked with green shading).
Summing them up, we get a formula:
The difference of memory consumption (RSS) per CPU =
      (69180 + 123624 + 89988 + 83380)   [nova]
    + (59652 + 56136)                    [neutron]
    + (57648)                            [cinder]
    + (77188 + 79476)                    [keystone]
    + (60920 + 60432)                    [glance]
    = 817624 kB ≈ 818 MB

This means that if we install OpenStack on a host/VM with one more CPU than another, in theory it will consume about 818 MB more memory.
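As a quick arithmetic check of the 818 MB figure above (using 1000 kB per MB, as the text does):

```python
# Per-process RSS deltas (kB per extra CPU), taken from the tables above
per_cpu_kb = {
    "nova": [69180, 123624, 89988, 83380],
    "neutron": [59652, 56136],
    "cinder": [57648],
    "keystone": [77188, 79476],
    "glance": [60920, 60432],
}

total_kb = sum(sum(values) for values in per_cpu_kb.values())
print(total_kb)                # 817624
print(round(total_kb / 1000))  # 818 (MB, at 1000 kB per MB)
```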

But, in fact, the value in an actual environment is much smaller than that.

2.4     Consumption in practice (controller)

I built an environment (16u/16g, 1 compute node) and manually changed the “workers” values in the conf files to simulate different CPU counts. I then recorded the following “used” memory 1 hour after restarting the OpenStack services:
[Table: used memory vs. simulated worker/CPU count]

From this we can see a smooth increasing line, and can summarize:
[Chart: used memory vs. worker count]
 
The difference of memory consumption / CPU
    = ((13564544 [48U] - 12907380 [46U]) + (12907380 [46U] - 12573920 [44U]) + (12573920 [44U] - 11927552 [42U]) + …… + (3231560 [4U] - 2432376 [2U])) / (2 * 23)
    = (13564544 - 2432376) / (2 * 23)
    ≈ 242 MB
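The calculation above is a telescoping sum: the intermediate terms cancel, leaving only the 48U and 2U endpoints, which can be checked numerically:

```python
# "used" memory readings in kB (from `free`) at the two endpoints
used_48u = 13564544
used_2u = 2432376
steps = 23  # 48U down to 2U in steps of 2 simulated CPUs

per_cpu_kb = (used_48u - used_2u) / (2 * steps)
print(round(per_cpu_kb / 1000))  # 242 (MB, at 1000 kB per MB)
```

So in practice each extra CPU costs roughly 242 MB, far below the theoretical 818 MB from section 2.3.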