Browsing the openresty repository today, I noticed the data-sharing-within-an-nginx-worker section of the lua-nginx-module documentation. In short, it describes how to confine shared data to each individual nginx worker.
Data sharing scoped to a single worker can be done with a Lua module: a Lua module is loaded only once per worker, and all coroutines in that worker share the module's data. In other words, every request handled by one worker can share a single copy of the data, while different workers may hold different data.
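This sharing relies on Lua's module cache: `require` stores the returned table in `package.loaded`, so the module body runs only on the first `require` in each Lua VM (one VM per worker). A minimal plain-Lua sketch of that behavior, outside nginx; the module name `demo` is made up for illustration:

```lua
-- package.preload lets us define a module inline; "demo" is a made-up name
package.preload["demo"] = function()
    print("module body runs")        -- executes only on the first require
    return { num = math.random(100) }
end

local a = require "demo"
local b = require "demo"
assert(a == b)          -- same cached table, served from package.loaded
assert(a.num == b.num)  -- so the data inside is shared too
```

In OpenResty the same caching happens once per worker process, which is exactly what produces the per-worker data below.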
Create a module that serves random numbers:
[root@nginx-cluster lua]# cat mydata.lua
local _M = {}

local data = {}

-- seed the PRNG once, at module load time (i.e. once per worker)
math.randomseed(tonumber(tostring(ngx.now() * 1000):reverse():sub(1, 9)))

-- generate the numbers once; they stay fixed for the worker's lifetime
-- (the index must be bound in a loop, otherwise data[i] indexes with nil)
for i = 1, 10 do
    data[i] = math.random(100)
end

function _M.get_num(i)
    return data[i]
end
return _M
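Note that `require "mydata"` only works if nginx can find mydata.lua on the Lua module search path. A sketch of the needed nginx.conf fragment, assuming mydata.lua sits in a `lua/` directory under the OpenResty prefix (adjust the path to your layout):

```nginx
http {
    # assumed location of mydata.lua; the trailing ';;' appends the defaults
    lua_package_path "/usr/local/openresty/nginx/lua/?.lua;;";
}
```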
The nginx entry location is as follows:
location /getNum {
    content_by_lua_block {
        local mydata = require "mydata"
        ngx.say(mydata.get_num(1))
    }
}
Test run:
[root@nginx-cluster conf.d]# for i in `seq 200`;do curl -s 127.0.0.1:8084/getNum >> test.log;done
[root@nginx-cluster conf.d]# cat test.log | sort -n | uniq -c
25 10
19 49
15 56
141 96
Four distinct values for four workers: each worker generated its own number at module load time, and the uneven counts simply reflect how connections were distributed among the workers. Next, test requests across different locations by adding another one:
location /getNum {
    content_by_lua_block {
        local mydata = require "mydata"
        ngx.say(mydata.get_num(1))
    }
}

location /getStr {
    content_by_lua_block {
        local mydata = require "mydata"
        ngx.say('xadocker nginx-worker-'..ngx.var.pid.." : "..mydata.get_num(1))
    }
}
After reloading nginx (which replaces the workers and therefore regenerates the numbers), test again:
[root@nginx-cluster conf.d]# >test.log
[root@nginx-cluster conf.d]# for i in `seq 200`;do curl -s 127.0.0.1:8084/getNum >> test.log;done
[root@nginx-cluster conf.d]# for i in `seq 200`;do curl -s 127.0.0.1:8084/getStr >> test.log;done
[root@nginx-cluster conf.d]# cat test.log | sort -n | uniq -c
16 xadocker nginx-worker-54681 : 48
22 xadocker nginx-worker-54682 : 72
134 xadocker nginx-worker-54683 : 10
28 xadocker nginx-worker-54684 : 94
126 10
17 48
31 72
26 94
# inspect the nginx processes
[root@nginx-cluster conf.d]# ps -ef | grep nginx
root 54680 1 0 00:35 ? 00:00:00 nginx: master process /usr/local/openresty/nginx/sbin/nginx
nobody 54681 54680 0 00:35 ? 00:00:00 nginx: worker process
nobody 54682 54680 0 00:35 ? 00:00:00 nginx: worker process
nobody 54683 54680 0 00:35 ? 00:00:00 nginx: worker process
nobody 54684 54680 0 00:35 ? 00:00:00 nginx: worker process
root 55554 7347 0 00:39 pts/1 00:00:00 grep --color=auto nginx
As shown, within the same worker process, different locations share the same data: worker 54683, for example, answers with 10 on both /getNum and /getStr. The module is loaded and run on the first request that requires it, after which all subsequent requests handled by the same worker share that copy of the data. The data is regenerated only when the workers are replaced, e.g. when the nginx master process receives a HUP signal.
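The same documentation section names the counterpart tool: when data must be shared across all workers (and be writable at runtime), a module-level table will not do; that is what `ngx.shared.DICT` is for. A minimal sketch, with the dict name `mycache` and the 1m size chosen arbitrarily:

```nginx
# in the http block: a shared memory zone visible to every worker
lua_shared_dict mycache 1m;

server {
    location /getShared {
        content_by_lua_block {
            local cache = ngx.shared.mycache
            -- add() only succeeds when the key is absent, so the first
            -- request wins no matter which worker handles it
            cache:add("num", math.random(100))
            ngx.say(cache:get("num"))
        }
    }
}
```

With this in place, every worker would return the same number, unlike the per-worker values above.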
I haven't yet run into a scenario where this is useful... if any reader has one, please let me know (●ˇ∀ˇ●)