
HAProxy max session limit




I have an Amazon OpsWorks stack running HAProxy (balance source) in front of several Node.js instances running socket.io. HAProxy appears to derive the max session limit for each instance from that instance's memory limits, which is fine, but my application can often expect clients to keep two pages open, each holding a socket connection, for upwards of 4 hours.



With a max session limit of 40 or 180, I could only support 20 or 60 concurrent clients until one disconnects. Once the limit is reached, further clients are placed in a queue until a slot becomes free, which, given the nature of the site, is unlikely to happen for quite a while. In effect, the site would only work for a small minority of users.
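That arithmetic can be sketched as follows (a hypothetical helper, not part of the app; note that a 180-session cap at two sockets per client would actually allow 90 clients - the 60 figure corresponds to three connections each):

```javascript
// Hypothetical helper (not part of the app): concurrent-client capacity
// under a per-server session cap when each client holds N connections.
function clientCapacity(maxSessions, socketsPerClient) {
  return Math.floor(maxSessions / socketsPerClient);
}

console.log(clientCapacity(40, 2));  // 20 clients under a 40-session cap
console.log(clientCapacity(180, 3)); // 60 clients if each held 3 connections
```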



What's the best way around this? I've read several posts where people run backends handling 4,000-30,000 sessions while each server has a max session limit of just 30, but how do they achieve that? Is there an HAProxy setting for it, or do they continuously disconnect and reconnect clients from within the application itself?



Edit



To shed some more light on the application: it is a PHP application that uses sockets for real-time events. The sockets are handled by socket.io, with the socket server built on Express. This server.js file communicates with an Amazon ElastiCache Redis server (as I understand it, socket.io 1.0 handles all of this backend via the Redis adapter).



On the client side, users connect to the socket server and emit a join event to enter a room unique to them. A user will then load a second page, connect again, and join that same unique room. This allows them to emit and receive various events over the course of their session, which, again, can last upwards of 4 hours.



HAProxy routes each user to the same server based on their IP hash (balance source); the rest of the options are kept at the OpsWorks defaults - see the config file below.



(Screenshot of the HAProxy stats page: http://puu.sh/hvHmb/52177d1e0d.png)



What I really need to know is: if Cur sessions hits 40 and those connections are long-lived (i.e. they don't disconnect readily), what happens to the clients in the queue? It's obviously no good if they're left waiting for 4 hours.
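Worth noting: judging from the posted config (this is my reading of standard HAProxy queue semantics, not verified behaviour), queued clients do not wait indefinitely:

```
timeout queue 120s   # a request can sit in the backend queue for at most
                     # 120 seconds; after that HAProxy returns a 503 to
                     # the client rather than keeping it waiting
```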



--



HAProxy.cfg



global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    #log loghost local0 info
    maxconn 80000
    #debug
    #quiet
    user haproxy
    group haproxy
    stats socket /tmp/haproxy.sock

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 80000
    timeout client 60s        # Client and server timeout must match the longest
    timeout server 60s        # time we may wait for a response from the server.
    timeout queue 120s        # Don't queue requests too long if saturated.
    timeout connect 10s       # There's no reason to change this one.
    timeout http-request 30s  # A complete request may never take that long.
    option httpclose          # disable keepalive (HAProxy does not yet support the HTTP keep-alive mode)
    option abortonclose       # enable early dropping of aborted requests from pending queue
    option httpchk            # enable HTTP protocol to check on servers health
    stats auth strexm:OYk8834nkPOOaKstq48b
    stats uri /haproxy?stats

# Set up application listeners here.
listen application 0.0.0.0:80
    # configure a fake backend as long as there are no real ones
    # this way HAProxy will not fail on a config check
    balance source
    server localhost 127.0.0.1:8080 weight 1 maxconn 5 check
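For what it's worth, the per-server session cap in the listener above is the `maxconn 5` on the `server` line (the OpsWorks template presumably substitutes real backends with instance-sized values). A hedged sketch of raising it - the server name, address, and the 2000 figure are purely illustrative, not a recommendation:

```
listen application 0.0.0.0:80
    balance source
    # per-server session cap; sessions beyond this number are queued.
    # Raise it if backends can hold many mostly-idle long-lived sockets.
    server node1 10.0.0.11:80 weight 1 maxconn 2000 check
```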


Server.js



var express = require('express');
var app = express();
var server = require('http').Server(app);
var io = require('socket.io')(server);
var redis = require('socket.io-redis');
io.adapter(redis({ host: '###', port: 6379 })); // ElastiCache endpoint redacted

server.listen(80); // opsworks node.js server requires port 80

app.get('/', function (req, res) {
  res.sendfile('./index.html');
});

io.sockets.on('connection', function (socket) {
  socket.on('join', function (room) {
    socket.join(room);
  });

  socket.on('alert', function (room) {
    socket.in(room).emit('alert_dashboard');
  });

  socket.on('event', function (data) {
    socket.in(data.room).emit('event_dashboard', data);
  });
});


Client



var socket = io.connect('http://haproxy_server_ip:80');
socket.on('connect', function () {
  socket.emit('join', roomId); // roomId: the room id # unique to this user
});









      haproxy node.js socket session






      edited Apr 29 '15 at 23:53







      J Young

















      asked Apr 29 '15 at 11:08









J Young

          2 Answers
































I don't think HAProxy sets these limits. I suspect there may be a limit on how many sessions are allowed from the same IP address, so if you are testing from a single machine, that is likely your problem. HAProxy can easily handle tens of thousands of connections.
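If a per-source limit were in play, it would look something like this in HAProxy (an illustrative sketch only - nothing in the posted config does this, and the frontend name and threshold are assumptions):

```
frontend application
    bind 0.0.0.0:80
    # track concurrent connections per client IP
    stick-table type ip size 100k expire 30s store conn_cur
    tcp-request connection track-sc0 src
    # reject a source once it holds more than 40 connections
    tcp-request connection reject if { sc0_conn_cur gt 40 }
```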






answered Apr 29 '15 at 18:14 by dtoubelis























          • Ah, that's possible. I did test it by opening 40 windows myself and then having several others connect - they were queued until I had closed enough windows to drop below 40. Admittedly not the most elegant way to test; I guess it's time to look into httperf or something.

            – J Young
            Apr 29 '15 at 23:12
































          Without seeing the configs or the sites you are talking about, I have to guess they achieve those session counts on a standard web app with http-server-close. In that case you have a large number of short-lived connections.



          A better comparison for what you are doing is WebSockets, which are very long-lived connections. For the SE network we have maxconn for our WebSocket tier set to 500,000.



          What you really want to do is figure out how many concurrent connections you need to support at any given time, and set your maxconn value to that. Of course, you'll have to make sure your load balancer has enough resources to support the number you select.
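A minimal sketch of that sizing advice against the posted config (the 100000 figure and server details are purely illustrative):

```
global
    maxconn 100000      # process-wide ceiling: at least the expected peak concurrency

defaults
    maxconn 100000      # per-proxy ceiling

listen application 0.0.0.0:80
    balance source
    # size per-server caps so (servers x maxconn) covers peak concurrency
    server node1 10.0.0.11:80 maxconn 25000 check
```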





























          • Right, OK. OpsWorks sets it to 8,000 by default (which at this early stage is ample), but just to clarify: even though each user holds a long-lived connection, as long as they are not emitting or receiving, do they effectively become a "ghost" user, leaving capacity for other users to still emit and receive? I've updated my main post to shed a little more light on the nature of the application. I appreciate your help!

            – J Young
            Apr 29 '15 at 23:40












          answered Apr 29 '15 at 18:14









          dtoubelisdtoubelis

          3,90612028




          3,90612028












          • Ah, that's possible. I did test it by opening 40 windows myself and then having several others connect - they were placed in the queue until it came to a point where I had less than 40 windows. It's probably not the most elegant way to test admittedly, I guess it's time to look into a httperf or something.

            – J Young
            Apr 29 '15 at 23:12

















          • Ah, that's possible. I did test it by opening 40 windows myself and then having several others connect - they were placed in the queue until it came to a point where I had less than 40 windows. It's probably not the most elegant way to test admittedly, I guess it's time to look into a httperf or something.

            – J Young
            Apr 29 '15 at 23:12
















          Ah, that's possible. I did test it by opening 40 windows myself and then having several others connect - they were placed in the queue until it came to a point where I had less than 40 windows. It's probably not the most elegant way to test admittedly, I guess it's time to look into a httperf or something.

          – J Young
          Apr 29 '15 at 23:12





          Ah, that's possible. I did test it by opening 40 windows myself and then having several others connect - they were placed in the queue until it came to a point where I had less than 40 windows. It's probably not the most elegant way to test admittedly, I guess it's time to look into a httperf or something.

          – J Young
          Apr 29 '15 at 23:12













          0














          Without seeing any configs that you are talking about and what sites you are talking about I have to guess they are achieving the number of sessions on a standard web app with http-server-close. So in that case you have a large number of short lived connections.



          A better example of what you are doing is web sockets, which are very long lived connections. For the SE network we have our maxconns for our web socket tear set to 500,000.



          What you really want to do is figure out how many concurrent connections you want to be able to support at any given time, and set your maxconn value to that. Of course you'll have to make sure that you have enough resources on your load balancer to support the number you select.






          answered Apr 29 '15 at 18:24

          Zypher
          34.3k44492












          Right OK. OpsWorks sets it to 8000 by default (which at this early stage is ample), but just to clarify: even though each user has a long-lived connection, as long as they are not emitting or receiving, do they effectively become a ghost user, in the sense that other users can still receive and emit? I've updated my main post to shed a little more light on the nature of the application. I appreciate your help!

          – J Young
          Apr 29 '15 at 23:40

































