



Slow Single Thread Performance on Dell R620 and Windows Server 2019


I recently purchased some new-to-me R620 servers for a cluster. Mostly they will be doing heavy database transactions, but in general they will run Hyper-V VMs doing a variety of work. It was during the database work that I realized the servers were performing much worse than my old R610. Since then I've swapped out controllers, NICs, and drives in search of performance comparable to other CrystalDiskMark results posted online for similar systems. Mostly it is my random single-threaded performance that seems horrible.

Changing the BIOS profile to Performance helped a lot, but I'm still running slow. Enabling or disabling read, write, and disk cache changes behavior, but does not radically alter performance either way. Every update is applied, and the tests below use no read-ahead, write-back, and disk cache enabled (the best-performing combination). Am I missing something, could my CPU really be that much of a single-thread bottleneck, or are my results normal? Thanks for any advice!
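Alongside the BIOS Performance profile, it is worth confirming the Windows power plan on the host, since the default Balanced plan also caps single-thread clocks. A minimal check from an elevated prompt, using only standard powercfg switches:

    # Show all power schemes and mark the active one.
    powercfg /list

    # Switch to the built-in High performance scheme (alias scheme_min).
    powercfg /setactive scheme_min

    # Verify the active scheme changed.
    powercfg /getactivescheme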



System:

R620

Windows Server 2019 Core with Hyper-V - Server 2019 and Ubuntu 18.04 guests

Dual E5-2650v2

128GB (16x8GB PC3L-12800R)

H710p mini mono

5x Intel D3-S4610 960GB SSDs in RAID 5

Intel X540 NIC
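For reference, the cache combination described above (no read-ahead, write-back, disk cache enabled) can be set from the OS with MegaCLI; a hedged sketch, assuming MegaCLI supports the LSI-based H710P and that it is adapter 0 with the RAID 5 volume as logical drive 0:

    # No read-ahead on the virtual disk (adapter 0, logical drive 0).
    MegaCli -LDSetProp NORA -L0 -a0
    # Write-back controller cache.
    MegaCli -LDSetProp WB -L0 -a0
    # Enable the physical drives' own cache.
    MegaCli -LDSetProp EnDskCache -L0 -a0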



Using CrystalDiskMark 3, 9 passes / 4 GB (all figures MB/s):

My system

Read / Write

Seq: 1018 / 1637

512K: 743 / 1158

4K: 19 / 23

4k QD32: 204 / 75



Comparison system - https://www.brentozar.com/archive/2013/08/load-testing-solid-state-drives-raid/

Read / Write

Seq: 1855 / 1912

512K: 1480 / 1419

4K: 34 / 51

4k QD32: 651 / 88



Using CrystalDiskMark 6, 2 passes / 100 MB (all figures MB/s):

My system

Read / Write

Seq Q32T1: 3022 / 3461

4k Q8T8: 335 / 290

4K Q32T1: 210 / 195

4K Q1T1: 32 / 30



Comparison system - https://www.youtube.com/watch?v=i-eCmE5itzM

Read / Write

Seq Q32T1: 554 / 264

4k Q8T8: 314 / 259

4K Q32T1: 316 / 261

4K Q1T1: 33 / 115



Using CrystalDiskMark 6, 5 passes / 1 GB (all figures MB/s):

My system

Read / Write

Seq Q32T1: 2619 / 1957

4k Q8T8: 306 / 132

4K Q32T1: 212 / 116

4K Q1T1: 25 / 27



Comparison system - R610, Hyper-V Server 2012 R2 Core host with 2008 R2 guests, dual X5670, 128 GB 1600 MHz RAM, 4x Samsung 860 Pro 1TB in RAID 5, H700 controller

Read / Write

Seq Q32T1: 754 / 685

4k Q8T8: 305 / 69

4K Q32T1: 262 / 69

4K Q1T1: 32 / 38
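For a second opinion on the weak 4K numbers, Microsoft's DiskSpd (the engine behind recent CrystalDiskMark versions) can reproduce the same access patterns. A minimal sketch, assuming diskspd.exe is on the PATH and D:\testfile.dat is a placeholder scratch file on the RAID 5 volume:

    # 4K random read, 1 thread, queue depth 1 -- the "4K Q1T1" case.
    # -Sh disables Windows caching so the array itself is measured.
    diskspd.exe -b4K -r -o1 -t1 -W5 -d60 -Sh -c4G D:\testfile.dat

    # Same pattern at queue depth 32 for the "4K Q32T1" case.
    diskspd.exe -b4K -r -o32 -t1 -W5 -d60 -Sh -c4G D:\testfile.dat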



Here are some real-world numbers compared to my old R610 system:



Export the same database table from a local MariaDB instance to a single R620 MariaDB Galera cluster node

R610 - 1.7 million recs/min

R620 - 1.16 million recs/min



Copy a folder with thousands of small files from VM to host (a repeatable timing sketch follows below)

R610 - 23 seconds

R620 - 2 min 40 seconds



Conversely, large file copies show good performance, with the R620 beating the R610 by about 35%.
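To make the small-file copy comparison repeatable, the transfer can be timed with Measure-Command around robocopy. A minimal sketch, where \\vm1\share\smallfiles and D:\copytest are placeholder paths:

    # Time a recursive copy of many small files; the logging switches
    # are disabled so console output does not skew the measurement.
    Measure-Command {
        robocopy \\vm1\share\smallfiles D:\copytest /E /NFL /NDL /NJH /NJS
    } | Select-Object TotalSeconds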

























Tags: dell-poweredge ssd hardware-raid dell-perc






asked May 21 at 18:35 by Justin M · edited May 24 at 17:45



  • CrystalMark is not a database benchmark. Edit your question to add which DBMS engine you are using, and real throughput or response time numbers for each.

    – John Mahowald
    May 22 at 3:39
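    For MariaDB, one quick way to produce the throughput numbers requested above is mysqlslap, which ships with the server. A minimal sketch, where galera-node1 and the bench user are placeholders:

        # Single-connection synthetic load against one Galera node.
        mysqlslap --host=galera-node1 --user=bench --password `
            --concurrency=1 --iterations=5 --number-of-queries=10000 `
            --auto-generate-sql --auto-generate-sql-load-type=mixed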











  • Thanks for the response, John. I use CrystalMark since it is a benchmark I can run quickly to compare my R620's drive performance against similar systems I've found online. I have edited my post to show two real-world examples against my old R610. Encountering multiple issues like these is what originally led me to question the performance of the new servers. I've tweaked every network, drive, controller, BIOS, and Hyper-V setting I can find, and I've already swapped NICs, drives, and controllers, so any advice is greatly appreciated!

    – Justin M
    May 22 at 20:17











  • Hi, does the copy test from VM to host use different guest operating systems? If so, it's not a good test, since we can't tell whether the integration drivers work as well in your Ubuntu guest as in the 2008 R2 one. Please run the benchmark with the same guest OS on both hosts, thanks!

    – yagmoth555
    May 23 at 0:01












  • The R610 tests are on a Windows 2012 R2 host from a 2008 R2 VM, and the R620 tests are on Windows 2019 hosts from 2019 VMs. I only mentioned Ubuntu to give as much information about the system as possible. I am going to try 2016 today and see whether there is an issue with 2019 that hasn't been uncovered yet.

    – Justin M
    May 23 at 16:26











  • It was 2019 after all. If anyone has any ideas why, it would save me a ton of work downgrading and redoing all my hosts and VMs. Thanks!

    – Justin M
    May 24 at 17:44












2 Answers


















Server 2019 is the problem after all. I've tried tweaking every setting, changing every piece of hardware, and updating everything to current as of May 2019. In the end the system performed well out of the box with Server 2016.

– answered May 24 at 17:43 by Justin M






























I'm assuming you've attempted to manually configure your NUMA settings for SQL, since SQL is a NUMA-aware application? Just grasping at straws here, but it's a thought.

– answered May 24 at 18:39 by bloonacho
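A minimal sketch of the matching Hyper-V-side NUMA checks, using standard Hyper-V PowerShell cmdlets ("SQLVM" is a placeholder VM name):

    # Show the host's NUMA topology as Hyper-V sees it.
    Get-VMHostNumaNode

    # Stop VMs from spanning NUMA nodes (takes effect after the
    # Hyper-V Virtual Machine Management service restarts).
    Set-VMHost -NumaSpanningEnabled $false

    # Confirm a VM's vCPUs and memory fit inside a single node.
    Get-VMProcessor -VMName SQLVM
    Get-VMMemory -VMName SQLVM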























    • Yep, I even balanced network ports across NUMA nodes, as well as VMs, etc. Not sure what broke with Server 2019, but even Hyper-V VMs running Server 2019 are slower than 2008-2016 VMs on a 2019 host, and I would have thought VMs would be more hardware-agnostic than the host. Nevertheless, changing the host to Server 2016 showed a further 375% improvement in DiskMark.

      – Justin M
      May 24 at 19:20










