Unwanted md arrays created while creating SW RAID6 pool


I ran into an issue where, while I was creating a bunch of RAID6 arrays on a storage server, some unwanted random arrays were created alongside them for no apparent reason.



I am using old disks, but I ran mdadm --zero-superblock on all of them, and also sgdisk -Z. After that, mdadm --examine didn't find any array, and after a reboot there was also none. The disks were previously used in a RAID50 arrangement.



Here is the /proc/mdstat output. You can see md125..md127 and a completely random md23 that are for some reason created from the still-assembling new RAID6 arrays.



I would assume it is possibly some old data from the previous SW RAID configuration, but as I said, I wiped the disks and there was no trace of any arrays after that.



Why are they there, and how can I get rid of them?



md9 : active raid6 sdbj[5] sdbi[4] sdbh[3] sdbg[2] sdbf[1] sdbe[0]
11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 0.0% (387900/2930135040) finish=2139.9min speed=22817K/sec
bitmap: 22/22 pages [88KB], 65536KB chunk

md125 : inactive md8[0](S)
8790274048 blocks super 1.2

md8 : active raid6 sdbd[5] sdbc[4] sdbb[3] sdba[2] sdaz[1] sday[0]
11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 0.0% (579836/2930135040) finish=2020.9min speed=24159K/sec
bitmap: 22/22 pages [88KB], 65536KB chunk

md7 : active raid6 sdax[5] sdaw[4] sdav[3] sdau[2] sdat[1] sdas[0]
11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 0.0% (759416/2930135040) finish=1735.8min speed=28126K/sec
bitmap: 22/22 pages [88KB], 65536KB chunk

md6 : active raid6 sdar[5] sdaq[4] sdap[3] sdao[2] sdan[1] sdam[0]
11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 0.0% (882816/2930135040) finish=1659.0min speed=29427K/sec
bitmap: 22/22 pages [88KB], 65536KB chunk

md126 : inactive md5[1](S)
8790274048 blocks super 1.2

md5 : active raid6 sdal[5] sdak[4] sdaj[3] sdai[2] sdah[1] sdag[0]
11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 0.0% (1106488/2930135040) finish=1520.6min speed=32103K/sec
bitmap: 22/22 pages [88KB], 65536KB chunk

md4 : active raid6 sdaf[5] sdae[4] sdad[3] sdac[2] sdab[1] sdaa[0]
11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 0.0% (1279132/2930135040) finish=1438.5min speed=33931K/sec
bitmap: 22/22 pages [88KB], 65536KB chunk

md127 : inactive md7[2](S) md3[1](S)
17580548096 blocks super 1.2

md3 : active raid6 sdz[5] sdy[4] sdx[3] sdw[2] sdv[1] sdu[0]
11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 0.0% (1488528/2930135040) finish=1361.9min speed=35839K/sec
bitmap: 22/22 pages [88KB], 65536KB chunk

md23 : inactive md2[1](S)
8790274048 blocks super 1.2

md2 : active raid6 sdr[5] sdq[4] sdp[3] sdo[2] sdn[1] sdm[0]
11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 0.0% (2165400/2930135040) finish=1032.5min speed=47260K/sec
bitmap: 22/22 pages [88KB], 65536KB chunk

md1 : active raid6 sdl[5] sdk[4] sdj[3] sdi[2] sdh[1] sdg[0]
11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 0.9% (28889600/2930135040) finish=610.7min speed=79172K/sec
bitmap: 22/22 pages [88KB], 65536KB chunk

md0 : active raid6 sdf[5] sde[4] sdd[3] sdc[2] sdb[1] sda[0]
11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 1.5% (45517312/2930135040) finish=771.3min speed=62328K/sec
bitmap: 22/22 pages [88KB], 65536KB chunk


The commands used to wipe the disks and create the arrays, just in case:



mdadm --zero-superblock
sgdisk -Z

mdadm --create /dev/md8 -v --raid-devices=6 --bitmap=internal --level=6 /dev/sda[yz] /dev/sdb[abcd]
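
Roughly, the full per-group sequence looked like this (a sketch only; the device names are the ones from the md8 group above and are illustrative):

for d in /dev/sda[yz] /dev/sdb[abcd]; do     # the six member disks of this group
    mdadm --zero-superblock "$d"             # clear any old md superblock on the raw disk
    sgdisk -Z "$d"                           # zap GPT and protective MBR
done

mdadm --create /dev/md8 -v --raid-devices=6 --bitmap=internal --level=6 \
    /dev/sda[yz] /dev/sdb[abcd]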


Apparently the system is somehow trying to add the newly created arrays to the RAID0 arrays from the previous configuration. But where is the data about that stored, so that I can wipe it clean and create a brand new RAID60?
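
(The mdadm -D output below lists /dev/md2, /dev/md5, /dev/md7, /dev/md3 and /dev/md8 as the members of the stray arrays, so the stale RAID0 superblocks presumably sit inside the data area of those md devices themselves rather than on the raw disks, which would explain why wiping the disks did not remove them. A sketch of how to check that, using md2 as the example:)

mdadm --examine /dev/md2     # shows the leftover raid0 superblock stored on the md device itself
wipefs /dev/md2              # without -a, wipefs only lists signatures and erases nothing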



root@vod0-brn:~# mdadm -D /dev/md23
/dev/md23:
Version : 1.2
Raid Level : raid0
Total Devices : 1
Persistence : Superblock is persistent

State : inactive

Name : vod0-brn:23 (local to host vod0-brn)
UUID : 2b4555e5:ed4f13ca:9a347c91:23748d47
Events : 0

Number Major Minor RaidDevice

- 9 2 - /dev/md2

root@vod0-brn:~# mdadm -D /dev/md127
/dev/md127:
Version : 1.2
Raid Level : raid0
Total Devices : 2
Persistence : Superblock is persistent

State : inactive

Name : debian:25
UUID : f4499ca3:b5c206e8:2bd8afd1:23aaea2c
Events : 0

Number Major Minor RaidDevice

- 9 7 - /dev/md7
- 9 3 - /dev/md3
root@vod0-brn:~# mdadm -D /dev/md126
/dev/md126:
Version : 1.2
Raid Level : raid0
Total Devices : 1
Persistence : Superblock is persistent

State : inactive

Name : debian:26
UUID : 52be5dac:b730c109:d2f36d64:a98fa836
Events : 0

Number Major Minor RaidDevice

- 9 5 - /dev/md5
root@vod0-brn:~# mdadm -D /dev/md125
/dev/md125:
Version : 1.2
Raid Level : raid0
Total Devices : 1
Persistence : Superblock is persistent

State : inactive

Name : debian:28
UUID : 4ea15dcc:1ab164fc:fa2532d1:0b93d0ae
Events : 0

Number Major Minor RaidDevice

- 9 8 - /dev/md8


After I mdadm --stop /dev/md** them, they are no longer present in /proc/mdstat, but they are still present in the system, which I don't like very much. It's only half a solution.
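
(A possible way to get rid of them for good, sketched with the md125/md8 pair from above: stop the stray array first, then zero the stale superblock on its member md device. The stray array must no longer be holding the member, otherwise mdadm cannot open it exclusively, which may be what the "Couldn't open /dev/md5 for write" error in the comments below is about:)

mdadm --stop /dev/md125              # stop the stray stacked raid0
mdadm --zero-superblock /dev/md8     # erase the old raid0 superblock stored on /dev/md8;
                                     # this should leave md8's own RAID6 superblocks
                                     # (which live on its sd* member disks) untouched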



root@vod0-brn:~# cat /dev/md 
md/ md0 md1 md125 md126 md127 md2 md23 md29 md3 md4 md5 md6 md7 md8 md9


mdadm --examine --scan will still find them, even under different names. What a mess :( :



ARRAY /dev/md/23 metadata=1.2 UUID=2b4555e5:ed4f13ca:9a347c91:23748d47 name=vod0-brn:23
ARRAY /dev/md/26 metadata=1.2 UUID=52be5dac:b730c109:d2f36d64:a98fa836 name=debian:26
ARRAY /dev/md/25 metadata=1.2 UUID=f4499ca3:b5c206e8:2bd8afd1:23aaea2c name=debian:25
ARRAY /dev/md/28 metadata=1.2 UUID=4ea15dcc:1ab164fc:fa2532d1:0b93d0ae name=debian:28









raid mdadm






asked Apr 23 at 10:07 by J B (387), edited Apr 23 at 10:23
1 Answer
































          It looks like there were MD raid devices created on top of other MD raid devices, which is why once /dev/md2 is created, the system detects a RAID0 on that device and creates /dev/md23.



          In this case it would be best to add a line to your /etc/mdadm/mdadm.conf file:



          DEVICE /dev/sd*


          Now the system should only consider /dev/sd* devices when trying to assemble existing MD raid devices.
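
For reference, a minimal sketch of what the relevant part of /etc/mdadm/mdadm.conf could look like; the ARRAY line is optional and its UUID is only a placeholder, and on Debian-based systems the initramfs copy of the file is refreshed afterwards with update-initramfs -u:

# /etc/mdadm/mdadm.conf (sketch)
DEVICE /dev/sd*                    # only scan raw sd* disks when assembling arrays
# optionally pin the arrays you do want, for example:
# ARRAY /dev/md0 metadata=1.2 UUID=<uuid from mdadm --detail /dev/md0>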






answered Apr 23 at 11:16 by wurtel























• Thanks, but I want to build RAID0 on top of my RAID6 arrays as well, so I wonder where the system reads the information about the previously existing RAIDs-on-top-of-RAIDs, so I can just wipe it. For some reason it remembers only a few of the old RAIDs. I wiped the disks left and right with dd, wipefs, --zero-superblock, sgdisk... but nothing helped. I still got 3 unwanted arrays assembled. This time they are named like the old ones: md23, md26 and md27. Any idea?

            – J B
            Apr 23 at 11:33












• If you know which MD devices you want to use for your RAID0, add those to the DEVICE line. Otherwise you need to run mdadm --zero-superblock on the assembled RAID6 devices.

            – wurtel
            Apr 23 at 11:36











• I get this: root@vod0-brn:~# mdadm --zero-superblock /dev/md5 mdadm: Couldn't open /dev/md5 for write - not zeroing. Is it due to the assembly still being in progress? Should I wait until the assembly is done, then maybe stop it, zero it and start it again?

            – J B
            Apr 23 at 11:52












• Might be, although it's usually possible to write to an MD device while syncing is active. Maybe it will end up zeroing the MD device's components itself :( Otherwise you don't have many options left besides completely filling the devices with dd if=/dev/zero of=/dev/sdxx bs=1024k...

            – wurtel
            Apr 23 at 12:01











• I did dd if=/dev/zero of=/dev/sdbn bs=1 count=512, but it didn't help; that only deleted the MBR and partition table. With sgdisk I deleted the GPT, and with wipefs -a the filesystem signatures... still no luck. It would be painful to dd all 60 3 TB disks :( Good thing I can experiment on this server, but as you said, I am running out of solutions.

            – J B
            Apr 23 at 12:08










