Slow Single Thread Performance on Dell R620 and Windows Server 2019
I recently purchased some new-to-me R620 servers for a cluster. Mostly they will be doing heavy database transactions, but they will also run Hyper-V VMs doing a variety of work. It was during the database work that I realized the servers were performing much worse than my old R610. Since then I've swapped out controllers, NICs, and drives in search of performance comparable to other DiskMark results posted online for similar systems. My random single-threaded performance in particular seems terrible. Changing the BIOS profile to Performance helped a lot, but I'm still running slow. Enabling/disabling read, write, and disk cache changes behavior, but does not alter performance radically either way. Every update is applied, and the tests below use No Read Ahead / Write Back / disk cache enabled (the best-performing combination). Am I missing something, could my CPU really be that much of a single-thread bottleneck, or are my results normal? Thanks for any advice!
System:
R620
Windows Server 2019 Core with Hyper-V - Server 2019 and Ubuntu 18.04 guests
Dual E5-2650v2
128GB (16x8GB PC3L-12800R)
PERC H710P Mini Mono
5x Intel D3-S4610 960GB SSDs in RAID 5
Intel X540 NIC
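For reference, the cache policies mentioned above can be toggled from the running OS rather than the PERC BIOS. A minimal sketch using MegaCLI, which works with the LSI-based H710P (assumes MegaCli64.exe is installed and the array is logical drive 0 on adapter 0):

    # Show the current cache policy on the virtual disk
    MegaCli64 -LDGetProp -Cache -L0 -a0
    # No Read Ahead / Write Back / drive cache enabled (the combination used for the tests below)
    MegaCli64 -LDSetProp NORA -L0 -a0
    MegaCli64 -LDSetProp WB -L0 -a0
    MegaCli64 -LDSetProp -EnDskCache -L0 -a0
    # Match the Windows power plan to the BIOS Performance profile
    powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c   # High performance scheme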
Using CrystalDiskMark 3 (9 runs / 4GB):
My system
Read / Write (MB/s)
Seq: 1018 / 1637
512K: 743 / 1158
4K: 19 / 23
4K QD32: 204 / 75
Comparison system - https://www.brentozar.com/archive/2013/08/load-testing-solid-state-drives-raid/
Read / Write (MB/s)
Seq: 1855 / 1912
512K: 1480 / 1419
4K: 34 / 51
4K QD32: 651 / 88
Using CrystalDiskMark 6 (2 runs / 100MB):
My system
Read / Write (MB/s)
Seq Q32T1: 3022 / 3461
4K Q8T8: 335 / 290
4K Q32T1: 210 / 195
4K Q1T1: 32 / 30
Comparison system - https://www.youtube.com/watch?v=i-eCmE5itzM
Read / Write (MB/s)
Seq Q32T1: 554 / 264
4K Q8T8: 314 / 259
4K Q32T1: 316 / 261
4K Q1T1: 33 / 115
Using CrystalDiskMark 6 (5 runs / 1GB):
My system
Read / Write (MB/s)
Seq Q32T1: 2619 / 1957
4K Q8T8: 306 / 132
4K Q32T1: 212 / 116
4K Q1T1: 25 / 27
Comparison system - R610, Hyper-V Core 2012 R2 with 2008 R2 guests, dual X5670, 128GB 1600MHz RAM, 4x Samsung 860 Pro 1TB in RAID 5, H700
Read / Write (MB/s)
Seq Q32T1: 754 / 685
4K Q8T8: 305 / 69
4K Q32T1: 262 / 69
4K Q1T1: 32 / 38
Here are some real-world numbers comparing the new R620s to my old R610 system.
Exporting the same database table from a local MariaDB instance to a single R620 MariaDB Galera cluster node:
R610 - 1.7 million recs/min
R620 - 1.16 million recs/min
Copying a folder with thousands of small files from VM to host:
R610 - 23 seconds
R620 - 2 min 40 seconds
Conversely, large file copies perform well, with the R620 beating the R610 by about 35%.
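To make the single-threaded random result easy to reproduce outside of CrystalDiskMark, here is a minimal sketch using Microsoft's diskspd plus a timed robocopy run (assumes diskspd.exe is on the PATH; the test-file and share paths are placeholders):

    # 4K random, queue depth 1, one thread, caches bypassed - mirrors the 4K Q1T1 test
    diskspd.exe -b4K -d60 -o1 -t1 -r -Sh -c4G D:\iotest.dat
    # Same test at queue depth 32 - mirrors 4K Q32T1
    diskspd.exe -b4K -d60 -o32 -t1 -r -Sh -c4G D:\iotest.dat
    # Time the VM-to-host small-file copy the same way on both hosts
    Measure-Command { robocopy \\vm1\share\smallfiles C:\temp\smallfiles /E /NFL /NDL }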
dell-poweredge ssd hardware-raid dell-perc
asked May 21 at 18:35 by Justin M; last edited May 24 at 17:45
Comments:

CrystalMark is not a database benchmark. Edit your question to add which DBMS engine you are using, and real throughput or response time numbers for each.
– John Mahowald, May 22 at 3:39

Thanks for the response, John. I use CrystalDiskMark since it's a benchmark I can run quickly that shows my R620's drive performance against similar systems I've found online. I have edited my post to show two real-world examples against my old R610. Encountering multiple issues like these is what caused me to question the performance of the new servers in the first place. I've tweaked every network, drive, controller, BIOS, and Hyper-V setting I can find, and I've changed NICs, drives, and controllers, so any advice is greatly appreciated!
– Justin M, May 22 at 20:17

Hi, does the VM-to-host copy test use different guest OSes? If so it's not a good test, since we can't tell whether the integration drivers work as well in your Ubuntu guest as in the 2008 R2 one. Please run the benchmark with the same guest OS on both, thanks!
– yagmoth555♦, May 23 at 0:01

The R610 tests are on a Windows 2012 R2 host from a 2008 R2 VM, and the R620 tests are on Windows 2019 hosts from 2019 VMs. I only mentioned Ubuntu to give as much information about the system as possible. I am going to try 2016 today and see if maybe there is an issue with 2019 that hasn't been uncovered yet.
– Justin M, May 23 at 16:26

It was 2019 after all. If anyone has any ideas why, it would save me a ton of work downgrading and redoing all my hosts and VMs. Thanks!
– Justin M, May 24 at 17:44
2 Answers
Server 2019 is the problem after all. I tried tweaking every setting, changed every piece of hardware, and updated everything to current as of May 2019; in the end the system performed well out of the box with Server 2016.
answered May 24 at 17:43 by Justin M
I'm assuming you've tried manually configuring your NUMA settings, since SQL Server is a NUMA-aware application? Just grasping at straws here, but it's a thought.
answered May 24 at 18:39 by bloonacho
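For anyone checking the same thing, here is a minimal PowerShell sketch of the host-side NUMA checks (the VM name is a placeholder, and Set-VMProcessor requires the VM to be off):

    # Show how memory and logical processors are split across the host's NUMA nodes
    Get-VMHostNumaNode
    # Keep VMs from spanning nodes so each stays NUMA-local
    Set-VMHost -NumaSpanningEnabled $false
    # Cap a VM's virtual NUMA topology explicitly (example value)
    Set-VMProcessor -VMName 'sql01' -MaximumCountPerNumaNode 8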
Yep, I even balanced network ports across NUMA nodes, as well as VMs, etc. Not sure what broke with Server 2019, but even Hyper-V VMs running Server 2019 are slower than 2008-2016 VMs on the same 2019 host, and I would have thought VMs would be a bit more hardware-agnostic than the host. Nevertheless, changing the host to Server 2016 showed a further 375% improvement in DiskMark.
– Justin M, May 24 at 19:20