
HAProxy max session limit



I have an Amazon OpsWorks stack running HAProxy (balance = source) and several node.js instances running socket.io. It seems HAProxy determines the max session limit for a given instance based on the memory limits of that instance, which is fine, but my application can often expect clients to be utilising two pages, both connected to a socket, for upwards of 4 hours.



With the max session limit at 40 or 180, I'd only be able to have 20/60 concurrent clients until one disconnects. Once the limit is reached, other clients will be placed in a queue until a slot becomes free, which, given the nature of the site, is unlikely to happen for quite some time. That means the site only works for a small minority of users.
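To make the arithmetic above concrete: the effective client capacity is just the per-server session limit divided by the number of connections each client holds open. A quick sketch (the function name and numbers are mine, for illustration only; this isn't anything HAProxy reports):

```javascript
// Back-of-envelope capacity check: each client holds one socket per open page.
// With two pages open per client, the per-server session limit is split in half.
function effectiveClients(maxSessions, connectionsPerClient) {
  return Math.floor(maxSessions / connectionsPerClient);
}

console.log(effectiveClients(40, 2)); // 20 concurrent clients before queueing starts
```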



What's the best way of getting around this? I've read several posts where people have a backend handling 4,000 - 30,000 sessions with a max session limit of just 30 per server, but how do they achieve this? Is there a setting in HAProxy, or do they continuously connect and disconnect clients from within the application itself?



Edit



To shed some more light on the application: it is a PHP application that uses sockets for real-time events. The sockets are handled by socket.io, with the socket server built on Express. This server.js file communicates with an Amazon ElastiCache Redis server (as I understand it, socket.io 1.0 handles all of this on the backend).



On the client side of things, users connect to the socket server and, on connection, emit a join event to enter a room unique to them. A user will then load a second page and again connect and join that same unique room. This allows them to emit and receive various events over the course of their session - again, this session can last upwards of 4 hours.



HAProxy routes the user to the same server based on their IP hash (balance source); the rest of the options are kept at the OpsWorks defaults - see the config file below.



http://puu.sh/hvHmb/52177d1e0d.png



I guess what I need to know is: if Cur sessions hits 40, and these connections are long-lived (i.e. they don't get disconnected readily), what will happen to those in the queue? It's obviously no good if they are left waiting 4 hours.



--



HAProxy.cfg



global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    #log loghost local0 info
    maxconn 80000
    #debug
    #quiet
    user haproxy
    group haproxy
    stats socket /tmp/haproxy.sock

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 80000
    timeout client 60s       # Client and server timeout must match the longest
    timeout server 60s       # time we may wait for a response from the server.
    timeout queue 120s       # Don't queue requests too long if saturated.
    timeout connect 10s      # There's no reason to change this one.
    timeout http-request 30s # A complete request may never take that long.
    option httpclose         # disable keepalive (HAProxy does not yet support the HTTP keep-alive mode)
    option abortonclose      # enable early dropping of aborted requests from pending queue
    option httpchk           # enable HTTP protocol to check on servers health
    stats auth strexm:OYk8834nkPOOaKstq48b
    stats uri /haproxy?stats

# Set up application listeners here.
listen application 0.0.0.0:80
    # configure a fake backend as long as there are no real ones
    # this way HAProxy will not fail on a config check
    balance source
    server localhost 127.0.0.1:8080 weight 1 maxconn 5 check
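Worth noting about the config above (this is my observation, not part of the OpsWorks template): with 60s client/server timeouts, an idle long-lived socket.io connection could be cut off unless traffic or keep-alive pings arrive within that window. HAProxy 1.5+ has a `timeout tunnel` directive that applies after a connection is upgraded (e.g. to WebSocket) and overrides the client/server timeouts for that phase. A hedged sketch of what that might look like, with a hypothetical 4h value matching the session length described:

```
# Hypothetical adjustment for long-lived WebSocket traffic (HAProxy 1.5+):
defaults
    timeout client 60s
    timeout server 60s
    timeout tunnel 4h          # applies once a connection is upgraded (WebSocket)
    option http-server-close   # instead of httpclose; keeps upgrades workable
```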


Server.js



var express = require('express');
var app = express();
var server = require('http').Server(app);
var io = require('socket.io')(server);
var redis = require('socket.io-redis');
io.adapter(redis({ host: ###, port: 6379 }));

server.listen(80); // opsworks node.js server requires port 80

app.get('/', function (req, res) {
  res.sendfile('./index.html');
});

io.sockets.on('connection', function (socket) {
  socket.on('join', function (room) {
    socket.join(room);
  });

  socket.on('alert', function (room) {
    socket.in(room).emit('alert_dashboard');
  });

  socket.on('event', function (data) {
    socket.in(data.room).emit('event_dashboard', data);
  });
});


Client



var socket = io.connect('http://haproxy_server_ip:80');
socket.on('connect', function () {
  socket.emit('join', room id #);
});









haproxy node.js socket session

asked Apr 29 '15 at 11:08 by J Young
edited Apr 29 '15 at 23:53
2 Answers
I don't think HAProxy sets these limits. I suspect there may be a limit on how many sessions from the same IP address are allowed, so if you are testing from a single machine, that would likely be your problem. HAProxy can easily handle tens of thousands of connections.






answered Apr 29 '15 at 18:14 by dtoubelis
• Ah, that's possible. I did test it by opening 40 windows myself and then having several others connect - they were placed in the queue until I had fewer than 40 windows open. It's probably not the most elegant way to test, admittedly; I guess it's time to look into httperf or something.

            – J Young
            Apr 29 '15 at 23:12
































Without seeing the configs or the sites you are talking about, I have to guess they are achieving those session numbers on a standard web app with http-server-close. In that case you have a large number of short-lived connections.

A better comparison for what you are doing is web sockets, which are very long-lived connections. For the SE network we have the maxconn for our web socket tier set to 500,000.

What you really want to do is figure out how many concurrent connections you want to be able to support at any given time, and set your maxconn value to that. Of course, you'll have to make sure your load balancer has enough resources to support the number you select.
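As an illustrative sketch of that advice (the server names, addresses, and numbers here are hypothetical, not from the question's stack), sizing maxconn for a tier of long-lived connections might look like:

```
global
    maxconn 100000    # total connections the process will accept

listen application 0.0.0.0:80
    balance source
    # per-server cap sized for the expected number of concurrent
    # long-lived sockets, rather than a default of a few dozen
    server node1 10.0.0.1:80 maxconn 25000 check
    server node2 10.0.0.2:80 maxconn 25000 check
```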





























• Right, OK. OpsWorks sets it to 8000 by default (which at this early stage is ample), but just to clarify: even though each user holds a long-lived connection, as long as they are not emitting or receiving, do they effectively become a "ghost" user, in the sense that other users can still receive and emit? I've updated my main post to shed a little more light on the nature of the application. I appreciate your help!

            – J Young
            Apr 29 '15 at 23:40











          Your Answer








          StackExchange.ready(function()
          var channelOptions =
          tags: "".split(" "),
          id: "2"
          ;
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function()
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled)
          StackExchange.using("snippets", function()
          createEditor();
          );

          else
          createEditor();

          );

          function createEditor()
          StackExchange.prepareEditor(
          heartbeatType: 'answer',
          autoActivateHeartbeat: false,
          convertImagesToLinks: true,
          noModals: true,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: 10,
          bindNavPrevention: true,
          postfix: "",
          imageUploader:
          brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
          contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
          allowUrls: true
          ,
          onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          );



          );













          draft saved

          draft discarded


















          StackExchange.ready(
          function ()
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fserverfault.com%2fquestions%2f686407%2fhaproxy-max-session-limit%23new-answer', 'question_page');

          );

          Post as a guest















          Required, but never shown

























          2 Answers
          2






          active

          oldest

          votes








          2 Answers
          2






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          0














          I don't think haproxy sets these limits. I have a suspicion that there may be limit on how many session from same the ip address are allowed, so if you are testing it from a single machine then that would likely be your problem. Haproxy indeed can easily handle 10th of thouthands connections.






          share|improve this answer























          • Ah, that's possible. I did test it by opening 40 windows myself and then having several others connect - they were placed in the queue until it came to a point where I had less than 40 windows. It's probably not the most elegant way to test admittedly, I guess it's time to look into a httperf or something.

            – J Young
            Apr 29 '15 at 23:12















          0














          I don't think haproxy sets these limits. I have a suspicion that there may be limit on how many session from same the ip address are allowed, so if you are testing it from a single machine then that would likely be your problem. Haproxy indeed can easily handle 10th of thouthands connections.






          share|improve this answer























          • Ah, that's possible. I did test it by opening 40 windows myself and then having several others connect - they were placed in the queue until it came to a point where I had less than 40 windows. It's probably not the most elegant way to test admittedly, I guess it's time to look into a httperf or something.

            – J Young
            Apr 29 '15 at 23:12













          0












          0








          0







          I don't think haproxy sets these limits. I have a suspicion that there may be limit on how many session from same the ip address are allowed, so if you are testing it from a single machine then that would likely be your problem. Haproxy indeed can easily handle 10th of thouthands connections.






          share|improve this answer













          I don't think haproxy sets these limits. I have a suspicion that there may be limit on how many session from same the ip address are allowed, so if you are testing it from a single machine then that would likely be your problem. Haproxy indeed can easily handle 10th of thouthands connections.







          share|improve this answer












          share|improve this answer



          share|improve this answer










          answered Apr 29 '15 at 18:14









          dtoubelisdtoubelis

          3,90612028




          3,90612028












          • Ah, that's possible. I did test it by opening 40 windows myself and then having several others connect - they were placed in the queue until it came to a point where I had less than 40 windows. It's probably not the most elegant way to test admittedly, I guess it's time to look into a httperf or something.

            – J Young
            Apr 29 '15 at 23:12

















          • Ah, that's possible. I did test it by opening 40 windows myself and then having several others connect - they were placed in the queue until it came to a point where I had less than 40 windows. It's probably not the most elegant way to test admittedly, I guess it's time to look into a httperf or something.

            – J Young
            Apr 29 '15 at 23:12
















          Ah, that's possible. I did test it by opening 40 windows myself and then having several others connect - they were placed in the queue until it came to a point where I had less than 40 windows. It's probably not the most elegant way to test admittedly, I guess it's time to look into a httperf or something.

          – J Young
          Apr 29 '15 at 23:12





          Ah, that's possible. I did test it by opening 40 windows myself and then having several others connect - they were placed in the queue until it came to a point where I had less than 40 windows. It's probably not the most elegant way to test admittedly, I guess it's time to look into a httperf or something.

          – J Young
          Apr 29 '15 at 23:12













          0














          Without seeing any configs that you are talking about and what sites you are talking about I have to guess they are achieving the number of sessions on a standard web app with http-server-close. So in that case you have a large number of short lived connections.



          A better example of what you are doing is web sockets, which are very long lived connections. For the SE network we have our maxconns for our web socket tear set to 500,000.



          What you really want to do is figure out how many concurrent connections you want to be able to support at any given time, and set your maxconn value to that. Of course you'll have to make sure that you have enough resources on your load balancer to support the number you select.






          share|improve this answer























          • Right OK. OpsWorks sets it to 8000 by default (which at this early stage is ample), but just to clarify, despite each user having a long lived connection, as long as they are not emitting or receiving, will they effectively become a ghost user in the sense that there is the allowance for other users to still receive and emit? I've updated my main post to shed a little more light on the nature of the application. I appreciate your help!

            – J Young
            Apr 29 '15 at 23:40















          answered Apr 29 '15 at 18:24

          – Zypher














