Unwanted md arrays created while creating SW RAID6 pool
I ran into an issue where, while creating a bunch of RAID6 arrays on a storage server, some unwanted random arrays were created along with them for no apparent reason.
I am using old disks, but I ran mdadm --zero-superblock on all of them, along with sgdisk -Z. After that, mdadm --examine didn't find any array, and after a reboot there was none either. The disks were previously used in a RAID50 arrangement.
Here is the /proc/mdstat output. You can see md125..md127 and a completely random md23 that were for some reason assembled from the still-resyncing new RAID6 arrays.
I would assume it's possibly some old data from the previous SW RAID configuration, but as I said, I wiped the disks and there was no trace of any arrays after that.
Why are they there, and how can I get rid of them?
md9 : active raid6 sdbj[5] sdbi[4] sdbh[3] sdbg[2] sdbf[1] sdbe[0]
11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 0.0% (387900/2930135040) finish=2139.9min speed=22817K/sec
bitmap: 22/22 pages [88KB], 65536KB chunk
md125 : inactive md8[0](S)
8790274048 blocks super 1.2
md8 : active raid6 sdbd[5] sdbc[4] sdbb[3] sdba[2] sdaz[1] sday[0]
11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 0.0% (579836/2930135040) finish=2020.9min speed=24159K/sec
bitmap: 22/22 pages [88KB], 65536KB chunk
md7 : active raid6 sdax[5] sdaw[4] sdav[3] sdau[2] sdat[1] sdas[0]
11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 0.0% (759416/2930135040) finish=1735.8min speed=28126K/sec
bitmap: 22/22 pages [88KB], 65536KB chunk
md6 : active raid6 sdar[5] sdaq[4] sdap[3] sdao[2] sdan[1] sdam[0]
11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 0.0% (882816/2930135040) finish=1659.0min speed=29427K/sec
bitmap: 22/22 pages [88KB], 65536KB chunk
md126 : inactive md5[1](S)
8790274048 blocks super 1.2
md5 : active raid6 sdal[5] sdak[4] sdaj[3] sdai[2] sdah[1] sdag[0]
11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 0.0% (1106488/2930135040) finish=1520.6min speed=32103K/sec
bitmap: 22/22 pages [88KB], 65536KB chunk
md4 : active raid6 sdaf[5] sdae[4] sdad[3] sdac[2] sdab[1] sdaa[0]
11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 0.0% (1279132/2930135040) finish=1438.5min speed=33931K/sec
bitmap: 22/22 pages [88KB], 65536KB chunk
md127 : inactive md7[2](S) md3[1](S)
17580548096 blocks super 1.2
md3 : active raid6 sdz[5] sdy[4] sdx[3] sdw[2] sdv[1] sdu[0]
11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 0.0% (1488528/2930135040) finish=1361.9min speed=35839K/sec
bitmap: 22/22 pages [88KB], 65536KB chunk
md23 : inactive md2[1](S)
8790274048 blocks super 1.2
md2 : active raid6 sdr[5] sdq[4] sdp[3] sdo[2] sdn[1] sdm[0]
11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 0.0% (2165400/2930135040) finish=1032.5min speed=47260K/sec
bitmap: 22/22 pages [88KB], 65536KB chunk
md1 : active raid6 sdl[5] sdk[4] sdj[3] sdi[2] sdh[1] sdg[0]
11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 0.9% (28889600/2930135040) finish=610.7min speed=79172K/sec
bitmap: 22/22 pages [88KB], 65536KB chunk
md0 : active raid6 sdf[5] sde[4] sdd[3] sdc[2] sdb[1] sda[0]
11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 1.5% (45517312/2930135040) finish=771.3min speed=62328K/sec
bitmap: 22/22 pages [88KB], 65536KB chunk
Commands used to create the arrays, just in case:
mdadm --zero-superblock
sgdisk -Z
mdadm --create /dev/md8 -v --raid-devices=6 --bitmap=internal --level=6 /dev/sda[yz] /dev/sdb[abcd]
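For completeness, the wipe was run per disk, roughly like this (a minimal sketch; the device globs are placeholders for the actual 60 disks):
for d in /dev/sd[a-z] /dev/sd[a-z][a-z]; do
    mdadm --zero-superblock "$d"   # erase any md superblock on the disk
    sgdisk -Z "$d"                 # zap the GPT and protective MBR
done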
Apparently the system is somehow trying to add the newly created arrays to RAID0 arrays from a previous configuration. But where is the data about them stored, so that I can wipe it clean and create a brand new RAID60?
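One way to see where this leftover metadata actually lives is to run --examine on the new md devices themselves rather than on the member disks (a sketch; md2 here is one of the freshly created RAID6 arrays):
mdadm --examine /dev/md2    # prints any RAID superblock stored on md2 itself,
                            # i.e. metadata nested on top of the RAID6 array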
root@vod0-brn:~# mdadm -D /dev/md23
/dev/md23:
Version : 1.2
Raid Level : raid0
Total Devices : 1
Persistence : Superblock is persistent
State : inactive
Name : vod0-brn:23 (local to host vod0-brn)
UUID : 2b4555e5:ed4f13ca:9a347c91:23748d47
Events : 0
Number Major Minor RaidDevice
- 9 2 - /dev/md2
root@vod0-brn:~# mdadm -D /dev/md127
/dev/md127:
Version : 1.2
Raid Level : raid0
Total Devices : 2
Persistence : Superblock is persistent
State : inactive
Name : debian:25
UUID : f4499ca3:b5c206e8:2bd8afd1:23aaea2c
Events : 0
Number Major Minor RaidDevice
- 9 7 - /dev/md7
- 9 3 - /dev/md3
root@vod0-brn:~# mdadm -D /dev/md126
/dev/md126:
Version : 1.2
Raid Level : raid0
Total Devices : 1
Persistence : Superblock is persistent
State : inactive
Name : debian:26
UUID : 52be5dac:b730c109:d2f36d64:a98fa836
Events : 0
Number Major Minor RaidDevice
- 9 5 - /dev/md5
root@vod0-brn:~# mdadm -D /dev/md125
/dev/md125:
Version : 1.2
Raid Level : raid0
Total Devices : 1
Persistence : Superblock is persistent
State : inactive
Name : debian:28
UUID : 4ea15dcc:1ab164fc:fa2532d1:0b93d0ae
Events : 0
Number Major Minor RaidDevice
- 9 8 - /dev/md8
After I mdadm --stop /dev/md** them, they are no longer present in /proc/mdstat, but they are still present in the system, which I don't like very much. It's just a half-solution.
root@vod0-brn:~# cat /dev/md
md/ md0 md1 md125 md126 md127 md2 md23 md29 md3 md4 md5 md6 md7 md8 md9
mdadm --examine will still find them, even with different names; what a mess :( :
ARRAY /dev/md/23 metadata=1.2 UUID=2b4555e5:ed4f13ca:9a347c91:23748d47 name=vod0-brn:23
ARRAY /dev/md/26 metadata=1.2 UUID=52be5dac:b730c109:d2f36d64:a98fa836 name=debian:26
ARRAY /dev/md/25 metadata=1.2 UUID=f4499ca3:b5c206e8:2bd8afd1:23aaea2c name=debian:25
ARRAY /dev/md/28 metadata=1.2 UUID=4ea15dcc:1ab164fc:fa2532d1:0b93d0ae name=debian:28
Tags: raid mdadm
asked Apr 23 at 10:07 by J B (edited Apr 23 at 10:23)
1 Answer
It looks like there were MD RAID devices created on top of other MD RAID devices, which is why, once /dev/md2 is created, the system detects a RAID0 superblock on that device and creates /dev/md23.
In this case it would be best to add a line to your /etc/mdadm/mdadm.conf file:
DEVICE /dev/sd*
Now the system should only consider /dev/sd* devices when trying to assemble existing MD RAID devices.
– wurtel
answered Apr 23 at 11:16
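A minimal sketch of the resulting config fragment (the ARRAY line is an illustrative placeholder; real entries come from mdadm --detail --scan, and on Debian-family systems the initramfs copy of the config is usually refreshed afterwards with update-initramfs -u):
# /etc/mdadm/mdadm.conf (fragment)
DEVICE /dev/sd*       # scan only whole disks for RAID superblocks
# Optionally pin the wanted arrays explicitly, e.g.:
# ARRAY /dev/md0 metadata=1.2 UUID=<uuid from mdadm --detail --scan>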
Thanks, but I want to build RAID0 on top of my RAID6 as well, so I wondered where the system reads the info about the previously existing RAIDs-on-top-of-RAIDs, so I can just wipe it. For some reason it is remembering only a few RAIDs. I wiped the disks left and right with dd, wipefs, --zero-superblock, sgdisk... but nothing helped. I still got 3 unwanted arrays assembled. This time they are named like the old ones: md23, md26 and md27. Any idea?
– J B
Apr 23 at 11:33
If you know what MD devices you want to use for your RAID0, add those to the DEVICE line. Otherwise you need to do mdadm --zero-superblock on the assembled RAID6 devices.
– wurtel
Apr 23 at 11:36
I get this: root@vod0-brn:~# mdadm --zero-superblock /dev/md5 mdadm: Couldn't open /dev/md5 for write - not zeroing. Is it due to the assembly in progress? Should I wait until the assembly is done, then maybe stop it, zero it and start it?
– J B
Apr 23 at 11:52
Might be, although it's usually possible to write to an MD device while syncing is active. Maybe it'll end up zeroing the MD device components itself :( Otherwise you don't have many options left besides completely filling the devices with dd if=/dev/zero of=/dev/sdxx bs=1024k ...
– wurtel
Apr 23 at 12:01
I did dd if=/dev/zero of=/dev/sdbn bs=1 count=512, but that didn't help; it deleted the MBR and partition table. With sgdisk I deleted the GPT, and with wipefs -a the filesystem... still no luck. It would be painful to dd all 60 3 TB disks :( Good thing I can experiment on this server, but as you said, I am running out of solutions.
– J B
Apr 23 at 12:08
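A note on why the 512-byte dd did not help: a version-1.2 md superblock is stored 4 KiB from the start of the member device, so zeroing only sector 0 removes the MBR but leaves the RAID metadata intact. A targeted wipe of just the metadata region might look like this (a sketch; /dev/sdX is a placeholder, and this destroys anything recognized on that device):
wipefs --all /dev/sdX                       # erase all recognized signatures, including linux_raid_member
dd if=/dev/zero of=/dev/sdX bs=1M count=8   # additionally zero the first 8 MiB, covering the 4 KiB superblock offset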