
RAID Data Setup in Linux (Also Supported in Windows)


I have an Asus Z170 motherboard with two 3TB data disks in RAID 1, configured using the BIOS setting "RAID", so Intel RAID is in effect. The IRST application in Windows shows the array and I can read and write to it. But in Linux (Ubuntu 15.10) the file manager shows two separate 3TB disks, and if I write to either one of them the file doesn't survive; that is, I can't see it in Windows. I can read the files but not write to the array. Somehow the array does not seem to be properly set up in Linux. Since this deals with using RAID from the Intel chipset, I thought I might ask the question here: how do I set up RAID in Linux using the Z170, without destroying the existing RAID array that Windows sees?
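For reference, these are the non-destructive commands I know of for checking how Linux currently sees the disks (standard mdadm/util-linux commands, nothing Intel-specific):

>cat /proc/mdstat              # lists any md arrays the kernel has assembled
>lsblk                         # block-device tree; an assembled IMSM volume would show up as an mdXXX device
>sudo mdadm --examine --scan   # prints ARRAY lines for any RAID metadata found on the disks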

 

If I examine my 2 drives with "mdadm --examine" I get:

 

>sudo mdadm --examine /dev/sdc

/dev/sdc:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.3.00
    Orig Family : b8dc9faf
         Family : b8dc9faf
     Generation : 000028f5
     Attributes : All supported
           UUID : 0e6a3741:f6666efc:3b3f793a:1b7fa58a
       Checksum : ff687ac8 correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk00 Serial : WD-WCC4N5VX7AFE
          State : active
             Id : 00000004
    Usable Size : 5860528392 (2794.52 GiB 3000.59 GB)

[WD3TB_Raid]:
           UUID : e1fb2dad:506bcbfb:18aee43c:7e59c40f
     RAID Level : 1
        Members : 2
          Slots : [UU]
    Failed disk : none
      This Slot : 0
     Array Size : 5860528128 (2794.52 GiB 3000.59 GB)
   Per Dev Size : 5860528392 (2794.52 GiB 3000.59 GB)
  Sector Offset : 0
    Num Stripes : 22892688
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean

  Disk01 Serial : WD-WCC4N1HND0JN
          State : active
             Id : 00000005
    Usable Size : 5860528392 (2794.52 GiB 3000.59 GB)

and for the other drive:

>sudo mdadm --examine /dev/sdd

/dev/sdd:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.3.00
    Orig Family : b8dc9faf
         Family : b8dc9faf
     Generation : 000028f5
     Attributes : All supported
           UUID : 0e6a3741:f6666efc:3b3f793a:1b7fa58a
       Checksum : ff687ac8 correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk01 Serial : WD-WCC4N1HND0JN
          State : active
             Id : 00000005
    Usable Size : 5860528392 (2794.52 GiB 3000.59 GB)

[WD3TB_Raid]:
           UUID : e1fb2dad:506bcbfb:18aee43c:7e59c40f
     RAID Level : 1
        Members : 2
          Slots : [UU]
    Failed disk : none
      This Slot : 1
     Array Size : 5860528128 (2794.52 GiB 3000.59 GB)
   Per Dev Size : 5860528392 (2794.52 GiB 3000.59 GB)
  Sector Offset : 0
    Num Stripes : 22892688
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean

  Disk00 Serial : WD-WCC4N5VX7AFE
          State : active
             Id : 00000004
    Usable Size : 5860528392 (2794.52 GiB 3000.59 GB)

Also, the contents of the file "/etc/mdadm/mdadm.conf" are as follows:

 

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY metadata=imsm UUID=0e6a3741:f6666efc:3b3f793a:1b7fa58a
ARRAY /dev/md/WD3TB_Raid container=0e6a3741:f6666efc:3b3f793a:1b7fa58a member=0 UUID=e1fb2dad:506bcbfb:18aee43c:7e59c40f

# This file was auto-generated on Sat, 23 Jan 2016 11:24:34 -0500
# by mkconf $Id$

-------------------

 

I'm not sure how to interpret this data and what to do to get the array functioning in Linux without losing its functionality in Windows.

 

It looks like I have an IMSM container in existence with UUID 0e6a3741:f6666efc:3b3f793a:1b7fa58a, and there is one member identified as UUID=e1fb2dad:506bcbfb:18aee43c:7e59c40f. I'm not sure why there aren't two members in the array, each with a different UUID. Perhaps that is my problem, but mdadm --examine on both /dev/sdc and /dev/sdd yielded the same UUIDs. Slightly confused.
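If I'm reading this right (my interpretation, not confirmed), the first UUID names the IMSM container that spans both physical disks, and the second names the RAID1 volume that is the container's single member, roughly:

container  UUID=0e6a3741:f6666efc:3b3f793a:1b7fa58a    <- spans /dev/sdc and /dev/sdd
  volume   [WD3TB_Raid]  RAID1  member=0  UUID=e1fb2dad:506bcbfb:18aee43c:7e59c40f

That would explain why both disks report the same pair of UUIDs.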

 

Any suggestions on what I should do to get this working in Linux without borking it in Windows? Some mdadm assemble commands perhaps, or a create? Thanks.
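In case it helps to be concrete, this is the kind of thing I had in mind (untested; the md device name is a guess, since IMSM volumes often come up as /dev/md126 or similar but that isn't guaranteed):

>sudo mdadm --assemble --scan   # assemble the container and volume listed in mdadm.conf
>cat /proc/mdstat               # confirm the RAID1 volume appeared
>sudo mount /dev/md126p1 /mnt   # mount the volume's first partition (device name is a guess)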

 

Derek

