LSI 9260-8i limited to max 200 MB/s (independent of the drives)


Our LSI MegaRAID 9260-8i controller appears to be limited to a maximum transfer rate of roughly 200 MB/s.
The server is an HP DL180 G6 running CentOS 7 (64-bit), and we are testing 4 TB SAS drives (model WD4001FYYG).
The controller has an iBBU08 battery backup unit (512 MB cache).
We have tested enabling/disabling the cache and direct I/O, but it doesn't solve the problem.



According to our tests, when operating concurrently on two different virtual disks (a RAID 10 array of six disks and a RAID 0 array of a single disk), we get at most 200 MB/s in aggregate when reading and at most 200 MB/s when writing.



We verified that per-drive performance decreases when a second, independent drive is operated concurrently, because the total bandwidth (approx. 200 MB/s) is shared among the independent drive operations: the controller is the bottleneck.



Conclusion:



The LSI controller is limiting total bandwidth to about 200 MB/s.



Why is this happening?
How can we fix it?
Could it be related to the PCIe card or slot?
How can we measure the actual transfer rate?
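For reference, one way to check the PCIe side is to read the controller's negotiated link width and speed with lspci; a minimal sketch (the bus address 01:00.0 below is a placeholder, find the real one with the first command). A link trained at x1 / 2.5 GT/s tops out around 250 MB/s, which would roughly match a ~200 MB/s ceiling:

# lspci | grep -i -e lsi -e megaraid     (find the controller's bus address)
# lspci -s 01:00.0 -vv | grep -i lnksta  (negotiated link, e.g. "Speed 2.5GT/s, Width x8")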



PS: The issue was filed as support ticket SR # P00117431, but we stopped getting answers from Avago Technologies (formerly LSI) after sending them detailed information.



Thanks



These are our I/O tests:



--- 1) Single-drive I/O test ---



Write test:



# sync
# echo 3 > /proc/sys/vm/drop_caches
# dd if=/dev/zero of=/tmp/test bs=8k count=1M conv=fsync

1048576+0 records in
1048576+0 records out
8589934592 bytes (8.6 GB) copied, 46.7041 s, 184 MB/s


Read test:



# sync
# echo 3 > /proc/sys/vm/drop_caches
# dd if=/tmp/test of=/dev/null bs=8k count=1M

1048576+0 records in
1048576+0 records out
8589934592 bytes (8.6 GB) copied, 47.1691 s, 182 MB/s


--- 2) Two-drive concurrent I/O tests ---



We repeat the previous test, but run the same I/O operations on a second, independent drive at the same time.
As a result, each drive now delivers only about 50% of its previous throughput, which shows that the I/Os on the second drive (/mnt/sdb/test) are sharing some limited resource on the LSI controller.



Write test:



Process 1:



[root@hp ~]# sync
[root@hp ~]# echo 3 > /proc/sys/vm/drop_caches
[root@hp ~]# dd if=/dev/zero of=/tmp/test bs=8k count=1M conv=fsync
1048576+0 records in
1048576+0 records out
8589934592 bytes (8.6 GB) copied, 87.8613 s, 97.8 MB/s


Process 2:



[root@hp ~]# dd if=/dev/zero of=/mnt/sdb/test bs=8k count=1M conv=fsync
1048576+0 records in
1048576+0 records out
8589934592 bytes (8.6 GB) copied, 86.3504 s, 99.5 MB/s


Read test:



Process 1:



[root@hp ~]# dd if=/tmp/test of=/dev/null bs=8k count=1M
1048576+0 records in
1048576+0 records out
8589934592 bytes (8.6 GB) copied, 81.5574 s, 105 MB/s


Process 2:



[root@hp ~]# dd if=/mnt/sdb/test of=/dev/null bs=8k count=1M
1048576+0 records in
1048576+0 records out
8589934592 bytes (8.6 GB) copied, 84.2258 s, 102 MB/s
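For reference, the aggregate throughput the controller actually delivers during these tests can be watched from a second terminal with iostat (from the sysstat package); a minimal sketch, assuming the two virtual disks appear as sda and sdb:

# iostat -mx 5 sda sdb    (per-device rMB/s and wMB/s every 5 seconds; sum across devices for the controller total)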









asked Sep 22 '15 at 17:31, edited Sep 23 '15 at 5:24 – Christopher Pereira
  • There's no DL180 G7. – ewwhite, Sep 22 '15 at 17:34

  • What results do you get if you run 3 processes instead of 2? – dtoubelis, Sep 22 '15 at 18:17

  • @ewwhite, I fixed the question (the server is a DL180 G6). – Christopher Pereira, Sep 23 '15 at 5:10

  • @dtoubelis, I used 2 processes to write to 2 independent drives. I don't have a third drive, but I tried 3 processes writing to 2 drives (2 processes writing to the same drive). Result: we get about 1/3 of the original per-process performance (the 200 MB/s total limit still holds). – Christopher Pereira, Sep 23 '15 at 5:16

















3 Answers














This can easily happen if the drive's 4K sectors are exposed as 512e (you didn't specify the drive model, so it's a bit of a wild guess, but considering it's a 4 TB drive, I'd say it's Advanced Format). So I would check whether your OS is aware of the drive's physical sector size, unless you want read-modify-write cycling. That means proper partition alignment and an appropriate block size for the filesystems you are using.

And yes, there's no such thing as an HP DL180 G7; Gen6 was the last, after which the model numbering moved on from 180.

Just in case, there's a pretty decent article for you (yes, you are using CentOS, but it's basically the same stuff when it comes to the internals).

Another thing you should probably check and enable is the controller write cache, if you have a BBU, of course.
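A minimal way to check the sector size and alignment from the OS, assuming the virtual disk appears as /dev/sdb (all commands are read-only; note that a RAID controller may mask the physical geometry and report 512/512 for its virtual disks):

# blockdev --getss --getpbsz /dev/sdb    (logical and physical sector size; a 512e drive reports 512 and 4096)
# cat /sys/block/sdb/queue/physical_block_size
# parted /dev/sdb align-check optimal 1  (checks that partition 1 starts on an optimally aligned boundary)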






answered Sep 22 '15 at 18:18 – drookie

  • The controller is using an iBBU08 (512 MB cache). We have tested enabling/disabling the cache and direct I/O, but it doesn't solve the problem. – Christopher Pereira, Sep 23 '15 at 5:35

  • Do you think the partition format is related to the max bandwidth problem? Please note that I/O performance is fine when operating on only one array/drive at a time. – Christopher Pereira, Sep 23 '15 at 5:37

  • I'm sure it plays its part, at least. Even with one drive. – drookie, Sep 23 '15 at 6:57
































Well, there's really not enough info here yet.

First, what are the drives? Model and SAS version supported?

Second, are you writing to the array from another drive or array, or writing and reading to and from the same array? If you keep it all in the same array, then you're splitting the available I/O for the drives themselves in half (at best): even though SAS is full duplex, if the data is distributed across the same disks and you're both reading from and writing to each disk, each disk has its own limit on the operations it can handle.

Also, if you're reading or writing back and forth between the single-drive RAID 0 and the RAID 10, then your bottleneck is the single drive. You'll only ever get the max speed that one drive can handle. By the way, 200 MB/s (roughly 1.6 Gb/s) isn't bad for a single HDD.
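As a related check, raw sequential reads with direct I/O take the filesystem and page cache out of the picture and measure each array separately; a sketch assuming the RAID 10 and RAID 0 virtual disks appear as /dev/sda and /dev/sdb (reads are non-destructive):

# dd if=/dev/sda of=/dev/null bs=1M count=8k iflag=direct
# dd if=/dev/sdb of=/dev/null bs=1M count=8k iflag=direct

If each array alone reads well above 200 MB/s this way but the two together still cap at ~200 MB/s, that would point to a shared bottleneck in the controller or its PCIe link rather than in the drives.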






answered Sep 22 '15 at 18:31 – user160004

  • I added more info. – Christopher Pereira, Sep 23 '15 at 5:24

  • The model is "WD4001FYYG" (SAS 6 Gb/s). I'm first writing (from memory) and then reading (into memory) on the same array; I'm not reading and writing at the same time. For test #2, please note that I'm operating on different arrays/drives at the same time. I understand that the max controller bandwidth will be shared, but a cap of 200 MB/s is too low for all the arrays/drives handled by the controller. And yes, 200 or 180 MB/s is OK for a single drive, but not for a RAID 10 of 6 disks. – Christopher Pereira, Sep 23 '15 at 5:35
































Maybe you are hitting the max IOPS your card can deliver: with 8K blocks, writing or reading at 200 MB/s means a sustained rate of about 25K IOPS.

Can you re-try with larger blocks (i.e. bs=1M or similar)? Does it change anything?
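Concretely, that would be the same write test with 1 MiB blocks and the same 8 GiB total:

# dd if=/dev/zero of=/tmp/test bs=1M count=8k conv=fsync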






answered Sep 22 '15 at 21:08 – shodanshok

  • Using bs=1M and count=8k gives the same results. – Christopher Pereira, Sep 23 '15 at 5:18










