HAProxy max session limit
I have an Amazon OpsWorks stack running HAProxy (balance = source) and several node.js instances running socket.io. It seems HAProxy determines the max session limit for a given instance based on the memory limits of that instance, which is fine, but my application can often expect clients to be utilising two pages, both connected to a socket, for upwards of 4 hours.
With the max session limit at 40 or 180, I'd only be able to have 20 or 60 concurrent clients until one disconnects. Once the limit is reached, other clients are placed in a queue until a slot becomes free, which, given the nature of the site, is unlikely to happen for quite a while. That means the site only works for a small minority of users.
What's the best way around this? I've read several posts describing backends handling 4,000-30,000 sessions with a max session limit of just 30 per server, but how do they achieve this? Is there an HAProxy setting, or is it more likely that their application continuously disconnects and reconnects clients?
Edit
To shed some more light on the application: it is a PHP application that uses sockets for real-time events. These sockets are handled through socket.io, with the socket server built on express. This server.js file communicates with an Amazon ElastiCache Redis server (which, to my understanding, socket.io 1.0 handles entirely on the backend).
On the client side, users connect to the socket server and emit a connect event to join a room unique to them. A user will then load a second page and again emit a connect event to join that same unique room. This allows them to emit and receive various events over the course of their session, which, again, can last upwards of 4 hours.
HAProxy routes the user to the same server based on their IP hash (balance source); the rest of the options are kept at the OpsWorks defaults - see the config file below.
I guess what I need to know is: if Cur sessions hits 40, and these connections are long-lived (i.e. they don't get readily disconnected), what will happen to those in the queue? It's obviously no good if they are left waiting for 4 hours.
--
HAProxy.cfg
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    #log loghost local0 info
    maxconn 80000
    #debug
    #quiet
    user haproxy
    group haproxy
    stats socket /tmp/haproxy.sock

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 80000
    timeout client 60s          # Client and server timeout must match the longest
    timeout server 60s          # time we may wait for a response from the server.
    timeout queue 120s          # Don't queue requests too long if saturated.
    timeout connect 10s         # There's no reason to change this one.
    timeout http-request 30s    # A complete request may never take that long.
    option httpclose            # disable keepalive (HAProxy does not yet support the HTTP keep-alive mode)
    option abortonclose         # enable early dropping of aborted requests from pending queue
    option httpchk              # enable HTTP protocol to check on servers health
    stats auth strexm:OYk8834nkPOOaKstq48b
    stats uri /haproxy?stats

# Set up application listeners here.
listen application 0.0.0.0:80
    # configure a fake backend as long as there are no real ones
    # this way HAProxy will not fail on a config check
    balance source
    server localhost 127.0.0.1:8080 weight 1 maxconn 5 check
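As a side note on the config above: with long-lived socket.io connections, the 60s client/server timeouts would cut idle websockets, and the per-server maxconn of 5 caps concurrency. A sketch of the kind of per-server and tunnel settings one might experiment with (the numbers and the server address are illustrative assumptions, not recommendations; timeout tunnel requires HAProxy 1.5+):

```
# Sketch only - illustrative values, assuming HAProxy 1.5+.
listen application 0.0.0.0:80
    balance source
    timeout tunnel 4h       # keep idle long-lived socket.io connections alive
    server node1 10.0.0.1:80 weight 1 maxconn 8000 check
```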
Server.js
var express = require('express');
var app = express();
var server = require('http').Server(app);
var io = require('socket.io')(server);
var redis = require('socket.io-redis');

io.adapter(redis({ host: ###, port: 6379 }));
server.listen(80); // opsworks node.js server requires port 80

app.get('/', function (req, res) {
    res.sendfile('./index.html');
});

io.sockets.on('connection', function (socket) {
    socket.on('join', function (room) {
        socket.join(room);
    });
    socket.on('alert', function (room) {
        socket.in(room).emit('alert_dashboard');
    });
    socket.on('event', function (data) {
        socket.in(data.room).emit('event_dashboard', data);
    });
});
Client
var socket = io.connect('http://haproxy_server_ip:80');
socket.on('connect', function () {
    socket.emit('join', room id #);
});
haproxy node.js socket session
asked Apr 29 '15 at 11:08 by J Young, edited Apr 29 '15 at 23:53
2 Answers
I don't think HAProxy sets these limits. I suspect there may be a limit on how many sessions are allowed from the same IP address, so if you are testing from a single machine, that would likely be your problem. HAProxy can easily handle tens of thousands of connections.

answered Apr 29 '15 at 18:14 by dtoubelis
Ah, that's possible. I did test it by opening 40 windows myself and then having several others connect - they were placed in the queue until I had fewer than 40 windows open. It's probably not the most elegant way to test, admittedly; I guess it's time to look into httperf or something.
– J Young
Apr 29 '15 at 23:12
Without seeing the configs and sites you are talking about, I have to guess they achieve those session numbers on a standard web app with http-server-close. In that case you have a large number of short-lived connections.

A better comparison for what you are doing is web sockets, which are very long-lived connections. For the SE network we have maxconn for our web socket tier set to 500,000.

What you really want to do is figure out how many concurrent connections you want to support at any given time, and set your maxconn value to that. Of course, you'll have to make sure your load balancer has enough resources to support the number you select.

answered Apr 29 '15 at 18:24 by Zypher
Right, OK. OpsWorks sets it to 8000 by default (which at this early stage is ample), but just to clarify: even though each user holds a long-lived connection, as long as they are not emitting or receiving, do they effectively become a ghost user, leaving room for other users to emit and receive? I've updated my main post to shed a little more light on the nature of the application. I appreciate your help!
– J Young
Apr 29 '15 at 23:40
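To make the "figure out how many concurrent connections, then set maxconn" advice concrete for the questioner's two-pages-per-user pattern, here is a back-of-the-envelope sizing sketch. The requiredMaxconn helper and its numbers are my own illustration, not from the answer:

```javascript
// Hypothetical sizing helper: each user holds `socketsPerUser`
// long-lived connections; `headroom` pads for reconnects and checks.
function requiredMaxconn(concurrentUsers, socketsPerUser, headroom) {
  return Math.ceil(concurrentUsers * socketsPerUser * headroom);
}

// e.g. 5,000 concurrent users, 2 open pages each, 20% headroom:
console.log(requiredMaxconn(5000, 2, 1.2)); // 12000
```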