The template file is configured as follows:
upstream test {
    server 127.0.0.1:11111;
    upsync 192.168.80.142:8500/v1/kv/bnspear/upstream/test/vitality upsync_timeout=6m upsync_interval=500ms upsync_type=consul strong_dependency=off;
    upsync_conf_path /tmp/test.conf;
    keepalive 512;
    check interval=3000 rise=2 fall=3 timeout=2000 type=tcp;
}
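upsync polls the consul KV path named in the upsync directive and rebuilds the server list from the keys found there. A backend can be added or removed by writing or deleting a key under that path; a minimal sketch using the consul HTTP API (the host, port, and parameter values below are illustrative, taken from the config above):

```shell
# Register a backend under the KV path watched by upsync above;
# the JSON value carries the per-server parameters.
curl -X PUT -d '{"weight":1, "max_fails":3, "fail_timeout":30}' \
    http://192.168.80.142:8500/v1/kv/bnspear/upstream/test/192.168.11.2:15200

# Delete the key; upsync drops the backend on its next poll
# (here at most upsync_interval=500ms later).
curl -X DELETE \
    http://192.168.80.142:8500/v1/kv/bnspear/upstream/test/192.168.11.2:15200
```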
After a sync, the keepalive value in /tmp/test.conf is updated automatically, and it is kept equal to the number of backend nodes behind the reverse proxy:
upstream test {
    keepalive 4;
    upsync 192.168.80.142:8500/v1/kv/bnspear/upstream/zeus.eus/vitality upsync_interval=500ms upsync_timeout=360000ms upsync_type=consul;
    upsync_conf_path /tmp/test.conf;
    server 192.168.11.2:15200 weight=1 max_fails=3 fail_timeout=30s;
    server 192.168.11.3:15200 weight=1 max_fails=3 fail_timeout=30s;
    server 192.168.11.4:15200 weight=1 max_fails=3 fail_timeout=30s;
    server 192.168.11.5:15200 weight=1 max_fails=3 fail_timeout=30s;
    check interval=3000 rise=2 fall=3 timeout=2000 type=tcp default_down=true;
    check_keepalive_requests 1;
    check_http_send "";
    check_http_expect_alive http_2xx http_3xx;
}
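The invariant above (keepalive equals the number of synced backends) can be sanity-checked by parsing the dumped file. A minimal sketch, assuming only the dump format shown above; the helper name is ours, not part of the module:

```python
import re

def keepalive_vs_servers(conf_text):
    """Return (number of 'server' lines, value of 'keepalive')
    from a dumped upstream block in the format shown above."""
    servers = re.findall(r'^\s*server\s+\S+', conf_text, re.M)
    m = re.search(r'^\s*keepalive\s+(\d+);', conf_text, re.M)
    keepalive = int(m.group(1)) if m else None
    return len(servers), keepalive

# Abridged copy of the synced /tmp/test.conf from this section:
dump = """
upstream test {
    keepalive 4;
    server 192.168.11.2:15200 weight=1 max_fails=3 fail_timeout=30s;
    server 192.168.11.3:15200 weight=1 max_fails=3 fail_timeout=30s;
    server 192.168.11.4:15200 weight=1 max_fails=3 fail_timeout=30s;
    server 192.168.11.5:15200 weight=1 max_fails=3 fail_timeout=30s;
}
"""
n_servers, keepalive = keepalive_vs_servers(dump)
print(n_servers, keepalive)  # 4 4
```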