Server crash (504 gateway timeout) with 100 concurrent users, using nginx and php5-fpm
We have a VPS server which is dedicated to a single website. Day to day it seems to work fine (say 20-50 concurrent users) but as soon as we get up to around 90+ concurrent users, the server starts to crash / timeout. It will start to show nginx's 504 Gateway Time-out error.
We had some issues earlier in the year where some data-heavy pages were taking about 7 seconds to load, which we largely resolved by optimising MySQL queries and making use of the MySQL query cache. However, that doesn't seem to help here!
When I say data heavy, it is loading approx 5000 records from the DB, through the framework.
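In case anyone wants to reproduce something like our load pattern, a cheap approximation is ApacheBench (a sketch only; our real traffic mix is far more varied than hammering one URL):

    # ~100 concurrent clients, 2000 requests total, against the heavy page
    ab -n 2000 -c 100 http://www.website.com/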
The server is running Ubuntu 15.10, with 4 CPUs and 4GB memory. MySQL is on its own server with 1GB memory. The MySQL server doesn't seem to get past about 30% utilisation, even with 100 users.
MySQL is configured with a 64MB query_cache_size and a 6MB query_cache_limit.
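For what it's worth, a quick way to check whether the query cache is actually earning its keep is to read the standard MySQL status counters (a diagnostic sketch; the host placeholder is ours to fill in):

    # Qcache_hits / (Qcache_hits + Com_select) gives the rough hit ratio
    mysql -h <mysql-host> -u root -p -e "SHOW GLOBAL STATUS LIKE 'Qcache%'; SHOW GLOBAL STATUS LIKE 'Com_select';"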
We have APC installed, but it doesn't seem to make much difference overall.
This is our nginx.conf:
    user www-data;
    worker_processes 4;
    pid /run/nginx.pid;

    events {
        worker_connections 1024;
        # multi_accept on;
    }

    http {
        ##
        # Basic Settings
        ##
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 15;
        types_hash_max_size 2048;
        # server_tokens off;
        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;
        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        client_body_buffer_size 32k;
        client_header_buffer_size 8k;
        large_client_header_buffers 8 64k;
        #client_body_buffer_size 10K;
        #client_header_buffer_size 1k;
        client_max_body_size 12m;
        #large_client_header_buffers 2 1k;

        fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=microcache:100m inactive=10m max_size=1024m;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";

        ##
        # SSL Settings
        ##
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
        ssl_prefer_server_ciphers on;

        ##
        # Logging Settings
        ##
        #access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        ##
        # Gzip Settings
        ##
        gzip on;
        gzip_disable "msie6";
        gzip_comp_level 3;
        gzip_vary on;
        gzip_proxied any;
        gzip_buffers 16 8k;
        gzip_http_version 1.1;
        gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;

        ##
        # Virtual Host Configs
        ##
        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
    }
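If my reading of this is right, nginx itself shouldn't be the choke point: with worker_processes 4 and worker_connections 1024 it can hold roughly 4 × 1024 = 4096 simultaneous connections, far more than our 100 users.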
This is the server block:
    server {
        listen 80 default;
        server_name www.website.com;
        root /var/www/website.com/httpdocs;
        index index.php index.html index.htm;

        location / {
            try_files $uri @handler;
        }

        error_page 404 /assets/error-404.html;
        error_page 500 /assets/error-500.html;

        location @handler {
            expires off;
            include fastcgi_params;
            fastcgi_pass unix:/var/run/php5-fpm.sock;

            # fastcgi caching
            # Cache everything by default
            set $no_cache 0;
            if ($request_method !~ ^(GET
These are the pool.d/www.conf details:
    pm = dynamic
    pm.max_children = 30
    pm.start_servers = 2
    pm.min_spare_servers = 1
    pm.max_spare_servers = 4
    pm.max_requests = 500
PHP's memory_limit is set to 128MB; however, each process usually sits at around 70MB.
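If I've done the maths right, the current cap of pm.max_children = 30 means at most 30 × 70 MB ≈ 2.1 GB of PHP workers at full load, which should still fit comfortably within our 4 GB.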
I didn't manage to grab a top snapshot while it was at 100 users, but this is the usual state (free -m):
                 total   used   free  shared  buffers  cached
    Mem:          3951   3793    157     114      273    2918
    -/+ buffers/cache:    602   3348
    Swap:            0      0      0
You'll see I did some experimenting with nginx's fastcgi_cache, which made a huge difference to performance (load times of 50-100ms); however, the website has a lot of user functionality (uploads, modifying, etc.) which didn't work with it enabled.
I would like to revisit fastcgi_cache, but I feel we must be able to get a better result out of this server even without it?!
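If I do revisit it, my understanding is that the usual fix for the broken uploads/edits is to bypass the cache for non-GET requests and for session traffic. A minimal sketch of what I'd try in the @handler block (assuming our session cookie is PHP's default PHPSESSID; that name is an assumption):

    # Sketch only: skip the microcache for anything that can mutate state
    set $no_cache 0;
    if ($request_method !~ ^(GET|HEAD)$) {
        set $no_cache 1;                  # POSTs/uploads must never be cached
    }
    if ($http_cookie ~* "PHPSESSID") {    # assumption: default PHP session cookie
        set $no_cache 1;                  # logged-in users always get fresh pages
    }
    fastcgi_cache microcache;
    fastcgi_cache_valid 200 301 10m;
    fastcgi_cache_bypass $no_cache;       # don't serve these from the cache
    fastcgi_no_cache $no_cache;           # ...and don't store them either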
I've been battling this one for a while now, so any help would be great.
Tags: ubuntu, nginx, mysql, php-fpm, timeout
asked Jun 15 '16 at 1:47 by Onfire
1 Answer
You have set pm.max_children to 30, which means at most 30 PHP scripts can be running at the same time. When more users visit your site, there are no free PHP processes left to serve their requests; nginx waits for a while and then returns the 504 Gateway Time-out error.
You seem to have plenty of memory to spare, as your cached column shows 2.9 GB. Check the average memory usage of your PHP processes with the top command; the figure of interest is the RES column. Divide 2 GB by that number and you'll get a safe value for the pm.max_children setting.
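For example, something along these lines will average the resident size across the pool (a sketch; it assumes the workers show up under the process name php5-fpm):

    # Average RES of php5-fpm workers, reported in MB (ps reports rss in KB)
    ps -C php5-fpm -o rss= | awk '{sum += $1; n++} END {printf "%d workers, ~%.0f MB each\n", n, sum/n/1024}'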
You should also consider raising pm.start_servers, pm.min_spare_servers and pm.max_spare_servers. Spare servers are processes that are ready to serve requests immediately; without them, the PHP process manager has to launch a new process first, which takes some time.
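Putting those together, a pool along these lines would be a reasonable starting point (the numbers are only illustrative, derived from the ~70 MB per process you quoted and ~2 GB of headroom; recompute from your own RES figures):

    ; Sketch only - tune max_children from your measured per-worker memory
    pm = dynamic
    pm.max_children = 28        ; ~2048 MB / 70 MB per worker
    pm.start_servers = 8
    pm.min_spare_servers = 4
    pm.max_spare_servers = 12
    pm.max_requests = 500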
answered Jun 18 '16 at 12:05 by Tero Kilkanen
I was under the impression that one PHP-FPM process could handle multiple users/files; is that not correct? If a single process can only handle one user, do we simply need more memory for more people? I checked top, which gives me about 60-70 MB per PHP process; that equates to about 40 servers if I leave a bit of memory free, so I will try this (this is how I originally got to 30 servers, allowing plenty of room). Now my settings are:
    pm.max_children = 40
    pm.start_servers = 6
    pm.min_spare_servers = 6
    pm.max_spare_servers = 10
– Onfire
Jun 18 '16 at 22:32
One PHP-FPM process can handle a single user at a time, so max_children users can each be executing a PHP script at any given moment. How many users per second one process can handle depends on how long each execution takes. – Tero Kilkanen
Jun 19 '16 at 12:18