Apache Tuning for 512GB RAM



Use Case:



API Server Cluster Optimized for JSON Data Submission



I have a distributed application sending data to API servers that are slaves of a MySQL 8 master. The applications do a full initial sync (~100,000 records in batches of 500), followed by incremental syncs every 5 minutes.



I have 3 Dell R620 servers with 512GB RAM and 5 SSDs in RAID 6 acting as web servers. I have dedicated one of them to being the MySQL master, using the following config:



[mysqld]
server-id=1
# GENERAL #
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
datadir = /var/lib/mysql/
bind-address=*

# GENERAL #
user = mysql
default-storage-engine = InnoDB

# MyISAM #
key-buffer-size = 32M
myisam-recover-options = FORCE,BACKUP

# SAFETY #
max-allowed-packet = 16M
max-connect-errors = 1000000
skip-name-resolve
#skip-grant-tables

# BINARY LOGGING #
log-bin = /var/lib/mysql/mysql-bin
binlog_expire_logs_seconds = 2000
sync-binlog = 1

# CACHES AND LIMITS #
tmp-table-size = 32M
max-heap-table-size = 32M
max-connections = 500
thread-cache-size = 50
open-files-limit = 10000
table-definition-cache = 4096
table-open-cache = 4096
#
## INNODB #
innodb-flush-method = O_DIRECT
innodb-log-files-in-group = 2
innodb-log-file-size = 512M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table = 1
innodb-buffer-pool-size = 360G
#
## LOGGING #
log-error = /var/lib/mysql/mysql-error.log
log-queries-not-using-indexes = 1
slow-query-log = 1
slow-query-log-file = /var/lib/mysql/mysql-slow.log
#

## REPLICATION ##
slave-parallel-workers=10
slave-parallel-type = LOGICAL_CLOCK
innodb-flush-log-at-timeout=1800

[mysql]

# CLIENT #
port = 3306


On the other servers that host the API, the goal is for them to run select queries against the local slave and write changes back to the master, which lets us dedicate more resources to receiving incoming API calls. Because those servers are primarily for Apache/PHP, I have reduced innodb-buffer-pool-size to 64G.



What optimizations should I use for Apache and PHP for high RAM servers?



I set this up, but I am not sure whether I am underutilizing the available resources:



<IfModule mpm_prefork_module>
# child processes started at startup
StartServers 200
# minimum and maximum idle children kept waiting for requests
MinSpareServers 20
MaxSpareServers 50
# maximum simultaneous requests served (Apache 2.4 directive)
MaxRequestWorkers 100
# 0 = never recycle a child process (Apache 2.4 directive)
MaxConnectionsPerChild 0
# upper bound for MaxRequestWorkers
ServerLimit 512
# pre-2.4 name for MaxRequestWorkers
MaxClients 512
# pre-2.4 name for MaxConnectionsPerChild
MaxRequestsPerChild 10000
</IfModule>


A more complete overview of my settings, including variables, status, and a mysqltuner.pl report, can be found here: http://plnkr.co/edit/eeGHzFX95j5auJ5lTYum?p=catalogue



Updates



We are currently receiving about 5,600 requests per hour, and roughly 70% of them may carry up to 500 records each needing an update or insert query (5,600 × 70% × 500 ≈ 1.96 million records per hour, i.e. roughly 550 queries per second). Server load is commonly between 2.5 and 4.



The website was written in Laravel 5.4. We tested throughput to the normal API routes (through Laravel, Eloquent, and so on) with Apache Bench, using the following command:

ab -c 100 -n 2000 -p sample.json -T application/json -H "Content-Type: application/json" -H "Authorization: Bearer eyJ0eXAiO" https://www.myserver.com/api/accounting



Here are the results:



Benchmarking www.myserver.com (be patient)
Completed 200 requests
Completed 400 requests
Completed 600 requests
Completed 800 requests
Completed 1000 requests
Completed 1200 requests
Completed 1400 requests
Completed 1600 requests
Completed 1800 requests
Completed 2000 requests
Finished 2000 requests


Server Software: Apache/2.4.29
Server Hostname: www.myserver.com
Server Port: 443
SSL/TLS Protocol: TLSv1.2,ECDHE-RSA-CHACHA20-POLY1305,2048,256
TLS Server Name: www.myserver.com

Document Path: /api/accounting
Document Length: 65 bytes

Concurrency Level: 100
Time taken for tests: 375.487 seconds
Complete requests: 2000
Failed requests: 1134
(Connect: 0, Receive: 0, Length: 1134, Exceptions: 0)
Total transferred: 735018 bytes
Total body sent: 162864000
HTML transferred: 131018 bytes
Requests per second: 5.33 [#/sec] (mean)
Time per request: 18774.370 [ms] (mean)
Time per request: 187.744 [ms] (mean, across all concurrent requests)
Transfer rate: 1.91 [Kbytes/sec] received
423.57 kb/s sent
425.49 kb/s total

Connection Times (ms)
min mean[+/-sd] median max
Connect: 3 315 1554.1 5 11497
Processing: 8420 18299 2501.9 18658 24051
Waiting: 8419 18298 2501.9 18658 24050
Total: 8424 18614 2791.2 18792 30388

Percentage of the requests served within a certain time (ms)
50% 18792
66% 19699
75% 20247
80% 20619
90% 21560
95% 22343
98% 23933
99% 27099
100% 30388 (longest request)


sample.json contained 500 records, and the server load hit 103. You will also notice that over half of our POSTs failed.






It seems Apache is our bottleneck. As I dug into this using get_included_files(), I found that Laravel uses 275 includes just to get to the routes.php file; by the time it starts posting to our API it is up to 462 includes, and by the end of the API post it has used 575 included files.



We rebuilt the same function outside Laravel as a single PHP page that defined the PDO connection and looped over the data in the same way, generating the insert and update queries, and it completed the same task with these stats:



Concurrency Level: 100
Time taken for tests: 16.367 seconds
Complete requests: 2000
Failed requests: 228
(Connect: 0, Receive: 0, Length: 228, Exceptions: 0)
Total transferred: 502228 bytes
Total body sent: 162804000
HTML transferred: 126228 bytes
Requests per second: 122.19 [#/sec] (mean)
Time per request: 818.366 [ms] (mean)
Time per request: 8.184 [ms] (mean, across all concurrent requests)
Transfer rate: 29.97 [Kbytes/sec] received
9713.76 kb/s sent
9743.73 kb/s total

Connection Times (ms)
min mean[+/-sd] median max
Connect: 3 9 14.7 6 98
Processing: 242 800 281.3 764 2187
Waiting: 241 799 281.3 764 2187
Total: 246 809 283.8 774 2195

Percentage of the requests served within a certain time (ms)
50% 774
66% 905
75% 986
80% 1040
90% 1201
95% 1328
98% 1493
99% 1618
100% 2195 (longest request)


Server load only hit 12 while posting these, with 0 failed posts. Because of the significant improvement, we are looking at pulling the API code out of Laravel and optimizing one server for MySQL, with multiple slaves behind it. Each slave would give the API read-only access on localhost to determine whether each record needs an insert or an update statement, and the resulting queries would then be executed against the MySQL master.
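
For illustration, a simplified sketch of that per-record flow; the DSNs, credentials, and the records table with its external_id, name, and amount columns are hypothetical placeholders rather than our actual schema:

<?php
// Read from the local slave to decide insert vs. update, then write to the master.
$local  = new PDO('mysql:host=127.0.0.1;dbname=api', 'reader', 'readpass');
$master = new PDO('mysql:host=master.example.com;dbname=api', 'writer', 'writepass');

$rows = json_decode(file_get_contents('php://input'), true);

$exists = $local->prepare('SELECT id FROM records WHERE external_id = ?');
$insert = $master->prepare('INSERT INTO records (external_id, name, amount) VALUES (?, ?, ?)');
$update = $master->prepare('UPDATE records SET name = ?, amount = ? WHERE external_id = ?');

foreach ($rows as $r) {
    $exists->execute([$r['external_id']]);
    if ($exists->fetchColumn() === false) {
        // not on the slave yet, so insert on the master
        $insert->execute([$r['external_id'], $r['name'], $r['amount']]);
    } else {
        // already replicated, so update on the master
        $update->execute([$r['name'], $r['amount'], $r['external_id']]);
    }
}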



While I have looked around for answers, most resources were written when 4GB-32GB of RAM was normal, and when you do find one that mentions 512GB, it usually refers to an SSD.










mysql php apache2 performance-tuning

asked Mar 19 at 20:57 by Alan, edited Mar 27 at 15:31


  • Tuning almost always should include measuring/logging and then a bit of trial and error with some good testing. The popular solution these days is to scale out, not to scale up to large 512GB systems, so I suspect you may be on your own a bit here. – Zoredache, Mar 20 at 0:06

  • @Zoredache I found these servers at a computer recycler (they get tens of thousands of servers per year as corporations upgrade) and got a killer deal on them ($1500 each). So while I realize we would normally put in more servers and scale laterally, I would love to know how to get the most out of these before I add more. Michael, I updated the question with more details. – Alan, Mar 20 at 17:09

  • Thanks, the master has dual 6-core E5-2667 v1 @ 2.9GHz; the slaves have dual 4-core E5-2643 v2 @ 3.5GHz. I am using the default settings in the BIOS. – Alan, Mar 20 at 18:24

  • I should add that at this time we are a startup with only about 50 customers. We envision having thousands, but have held off on advertising and sales until things are as optimized as they can be. – Alan, Mar 20 at 18:42

  • What OS and PHP version are you using? Have you configured and enabled opcache? – shodanshok, Mar 20 at 20:32

2 Answers














Suggestion for your ulimit -a results:

ulimit -n 24000 to raise the current limit of 1024 open files

The above takes effect dynamically on Linux; services stopped and started afterwards will have access to the additional handles. To make the limit persistent across an OS shutdown/restart, review the URL below for instructions for a similar OS. Those instructions set file-max to 500000; please set your capacity to 24000 for now. With ulimit set to 24000, MySQL can use the 10,000 file descriptors it requests and still have spares for other applications.



https://glassonionblog.wordpress.com/2013/01/27/increase-ulimit-and-file-descriptors-limit/
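
As a minimal sketch, one common way to make the higher limit persistent is shown below; the limits.conf entries, the systemd drop-in path, and the mysqld.service unit name are assumptions for a typical systemd-based distribution, so adjust them to your OS:

# /etc/security/limits.conf: raise the open-file limit for the mysql user
mysql soft nofile 24000
mysql hard nofile 24000

# systemd services do not read limits.conf, so also add a drop-in override:
# /etc/systemd/system/mysqld.service.d/limits.conf
[Service]
LimitNOFILE=24000

# then reload systemd and restart the service
systemctl daemon-reload
systemctl restart mysqld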



Suggestions to consider for your my.cnf [mysqld] section (RPS = Rate Per Second)



innodb_buffer_pool_size=36G # from 240G because your innodb data+ndx ~ 22G
innodb_lru_scan_depth=100 # from 1024 to conserve 90% cpu cycles used for this function
max_connections=600 # from 500 - you are denying many connections today
innodb_io_capacity=1900 # from 200 to enable higher IOPS
read_rnd_buffer_size=192K # from 256K to reduce handler_read_rnd_next RPS


Disclaimer: I am the author of the content on the web site mentioned in my Network profile.






answered Mar 27 at 18:38 by Wilson Hauck, edited Apr 18 at 2:45 by Ward


  • Thank you Wilson, I will implement and do some benchmarks to compare. – Alan, Mar 27 at 22:39

  • @Alan Could you post feedback on your benchmark timings? Thanks – Wilson Hauck, Apr 16 at 21:19














Thundering herd. Don't run such a high number of Apache children. Set it no higher than, say, the number of cores (threads) in your CPUs. Keep in mind that MySQL is also competing for those cores.



You have conflicting values for innodb_buffer_pool_size. Set it to perhaps twice the amount of data that you have, but not so big that it causes swapping. And set innodb_buffer_pool_instances = 16.



Is the "500" requests in a single INSERT or UPDATE or IODKU? If not, let's see what you are doing, and work toward making it a single SQL command. This may speed up the MySQL part by a factor of 10.



Related to that is the question of who decides between "inserting" and "updating".



What is the relevance of "JSON"? Is there a big JSON string that splits into 500 inserts? Or is a JSON string the 'meat' of the insert?



To get a better feel for the activity: how many queries per second are you performing? How many rows are inserted (or updated) per second?
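
One way to collect those numbers is sketched below using the standard mysqladmin client; the 60-second sampling interval is arbitrary:

# report counter deltas every 60 seconds (-r = relative, -i = interval)
mysqladmin -u root -p -r -i 60 extended-status | grep -E 'Questions|Com_insert|Com_update'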






answered Mar 24 at 0:43 by Rick James