LSI 9260-8i limited to max 200 MB/s (independently from the drives)


Our LSI MegaRAID 9260-8i controller is limited to a maximum transfer rate of about 200 MB/s, independently of the drives attached.
The server is an HP DL180 G6 running CentOS 7 (64-bit), and we are testing 4 TB SAS drives (model: WD4001FYYG).
The controller is equipped with an iBBU08 (512 MB cache).
We have tested enabling/disabling the cache and direct I/O, but it doesn't solve the problem.



According to our tests, when reading or writing concurrently on two different virtual disks (a RAID 10 array of 6 disks and a RAID 0 array of a single disk), we get a combined maximum of 200 MB/s when reading and 200 MB/s when writing.



We verified that performance decreases when operating concurrently on a different drive, because the bandwidth (approx. 200 MB/s) is shared among the independent drive operations (i.e., the controller is the bottleneck).



Conclusion:



The LSI controller is limiting the total bandwidth to a max of 200 MB/s.



Why is this happening?
How can we fix it?
Could it be related to the PCIe slot?
How can we measure the actual transfer rate?
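Regarding the PCIe question: a downtrained PCIe link would produce exactly this kind of controller-wide cap (a PCIe 1.0 x1 link tops out around 200-250 MB/s, while the 9260-8i is a PCIe 2.0 x8 card). A minimal sketch of how to check the negotiated link speed and width with standard tools; the 05:00.0 address is a placeholder for whatever address lspci actually reports for the controller:

# List LSI devices (PCI vendor ID 1000) to find the controller's address
lspci -d 1000:
# Compare the card's capability (LnkCap) with the negotiated link (LnkSta);
# replace 05:00.0 with the address reported above
lspci -vv -s 05:00.0 | grep -E 'LnkCap|LnkSta'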



PS: The issue was filed as support ticket SR # P00117431, but we stopped getting answers from Avago Technologies (formerly LSI) after sending them detailed info.



Thanks



These are our I/O tests:



--- 1) Single-drive I/O test ---



Write test:



# sync
# echo 3 > /proc/sys/vm/drop_caches
# dd if=/dev/zero of=/tmp/test bs=8k count=1M conv=fsync

1048576+0 records in
1048576+0 records out
8589934592 bytes (8.6 GB) copied, 46.7041 s, 184 MB/s


Read test:



# sync
# echo 3 > /proc/sys/vm/drop_caches
# dd if=/tmp/test of=/dev/null bs=8k count=1M

1048576+0 records in
1048576+0 records out
8589934592 bytes (8.6 GB) copied, 47.1691 s, 182 MB/s


--- 2) Two-drive concurrent I/O test ---



We repeat the previous test, but run the same I/O operations on a second, independent drive at the same time.
As a result, each drive now performs at only 50%, which shows that the I/O on the second drive (/mnt/sdb/test) is sharing some limited resource on the LSI controller.
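For reproducibility, a minimal way to launch both writers concurrently (the same dd invocations as in the transcripts below) and wait for both to finish:

sync
echo 3 > /proc/sys/vm/drop_caches
# One writer per virtual disk, started in parallel
dd if=/dev/zero of=/tmp/test bs=8k count=1M conv=fsync &
dd if=/dev/zero of=/mnt/sdb/test bs=8k count=1M conv=fsync &
wait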



Write test:



Process 1:



[root@hp ~]# sync
[root@hp ~]# echo 3 > /proc/sys/vm/drop_caches
[root@hp ~]# dd if=/dev/zero of=/tmp/test bs=8k count=1M conv=fsync
1048576+0 records in
1048576+0 records out
8589934592 bytes (8.6 GB) copied, 87.8613 s, 97.8 MB/s


Process 2:



[root@hp ~]# dd if=/dev/zero of=/mnt/sdb/test bs=8k count=1M conv=fsync
1048576+0 records in
1048576+0 records out
8589934592 bytes (8.6 GB) copied, 86.3504 s, 99.5 MB/s


Read test:



Process 1:



[root@hp ~]# dd if=/tmp/test of=/dev/null bs=8k count=1M
1048576+0 records in
1048576+0 records out
8589934592 bytes (8.6 GB) copied, 81.5574 s, 105 MB/s


Process 2:



[root@hp ~]# dd if=/mnt/sdb/test of=/dev/null bs=8k count=1M
1048576+0 records in
1048576+0 records out
8589934592 bytes (8.6 GB) copied, 84.2258 s, 102 MB/s









Tags: bandwidth, lsi






asked Sep 22 '15 at 17:31 by Christopher Pereira; edited Sep 23 '15 at 5:24

  • There's no DL180 G7. – ewwhite, Sep 22 '15 at 17:34

  • What results do you get if you run 3 processes instead of 2? – dtoubelis, Sep 22 '15 at 18:17

  • @ewwhite, I fixed the question (the server is a DL180 G6). – Christopher Pereira, Sep 23 '15 at 5:10

  • @dtoubelis, I used 2 processes to write to 2 independent drives. I don't have a third drive, but I tried 3 processes writing to 2 drives (2 processes writing to the same drive). Result: we get about 1/3 of the original performance (the 200 MB/s limit still holds). – Christopher Pereira, Sep 23 '15 at 5:16







3 Answers
































This can easily happen because the drive's 4K sectors are 512e (you didn't specify the drive model, so it's a bit of a wild guess, but considering it's a 4 TB drive, I'd say it's Advanced Format). So I would check whether your OS is aware of the drive's sector size, unless you want read-modify-write cycling. This means proper partition alignment and an appropriate block size for the filesystems you are using.

And yeah, there's no such thing as an HP DL180 G7; Gen6 was the last one, after which the model index changed from 180.

Just in case, there's a pretty decent article on this (yes, you are using CentOS, but it's basically the same stuff when it comes to internals).

Another thing you should probably check and enable is the controller write cache, if you have a BBU, of course.
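A minimal sketch of how to verify the sector size and alignment on Linux, assuming the virtual disk appears as /dev/sdb (all standard tools; adjust the device name):

# Logical vs. physical sector size as the kernel sees them
cat /sys/block/sdb/queue/logical_block_size
cat /sys/block/sdb/queue/physical_block_size
# Topology and alignment overview for all block devices
lsblk -t
# Check whether partition 1 is optimally aligned
parted /dev/sdb align-check optimal 1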






– drookie, answered Sep 22 '15 at 18:18























  • The controller is using iBBU08 (512 MB cache). We have tested enabling/disabling the cache and direct I/O, but it doesn't solve the problem. – Christopher Pereira, Sep 23 '15 at 5:35

  • Do you think the partition format is related to the max bandwidth problem? Please note that I/O performance is fine when operating on only one array/drive at a time. – Christopher Pereira, Sep 23 '15 at 5:37

  • I'm sure it plays its part, at least. Even with one drive. – drookie, Sep 23 '15 at 6:57
































Well, there's really not enough info here yet.

First, what are the drives? Model and SAS version supported?

Second, are you writing to the array from another drive or array, or reading and writing to and from the same array? If you keep it all in the same array, then you're splitting the available I/O for the drives themselves in half (at best): even though SAS is full duplex, if the data is distributed across the same disks and you're reading and writing each disk simultaneously, each disk has its own limit on how many disk operations it can handle.

Also, if you're reading or writing back and forth between the single-drive RAID 0 and the RAID 10, then your bottleneck is the single drive. You'll only ever get the max speed that one drive can handle. Which, by the way, at 200 MB/s (roughly 1.6 Gb/s) isn't bad for a single HDD.






– user160004, answered Sep 22 '15 at 18:31























  • I added more info. – Christopher Pereira, Sep 23 '15 at 5:24

  • The model is "WD4001FYYG" (SAS 6 Gb/s). I'm first writing (from memory) and then reading (to memory) from the same array; I'm not reading and writing at the same time. For test #2, please note that I'm operating on different arrays/drives at the same time. I understand that the max controller bandwidth will be shared, but a max of 200 MB/s is too low for all the arrays/drives handled by the controller. And yes, 200 or 180 MB/s is OK for a single drive, but not for a RAID 10 of 6 disks. – Christopher Pereira, Sep 23 '15 at 5:35
































Maybe you are hitting the max IOPS delivered by your card: with 8K requests, writing or reading at 200 MB/s means a sustained rate of about 25K IOPS.

Can you retry with larger blocks (i.e., use bs=1M or similar)? Does it change anything?
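To make the arithmetic explicit, and to show the suggested retest (same 8 GiB total as the tests above; /tmp/test is the path used there):

# 200 MB/s at 8 KiB per request:
#   (200 * 1024 KiB/s) / 8 KiB per I/O = 25,600 IOPS (~25K)
# Retest with 1 MiB blocks, same 8 GiB total:
sync
echo 3 > /proc/sys/vm/drop_caches
dd if=/dev/zero of=/tmp/test bs=1M count=8k conv=fsync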






– shodanshok, answered Sep 22 '15 at 21:08























  • Using bs=1M and count=8k gives the same results. – Christopher Pereira, Sep 23 '15 at 5:18










