
Intel® Rapid Storage Technology

enterprise (Intel® RSTe) for

Linux OS

Software User’s Guide

June 2012

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL® PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. Intel products are not intended for use in medical, life saving, or life sustaining applications.

Intel may make changes to specifications and product descriptions at any time, without notice.

Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them.

The Intel® Matrix Storage Manager may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

Intel, Intel® Matrix Storage Manager, Intel® Matrix Storage Technology, Intel® Rapid Recover Technology, and the Intel logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

* Other names and brands may be claimed as the property of others.

Contents

2.7 Advanced Host Controller Interface ... 16
2.7.1 Native Command Queuing ... 16
2.7.2 Hot-Plug ... 16
2.8 SAS Controller Unit ... 17
2.8.1 SCU OEM Parameters ... 17
2.8.2 Linux libsas Sysfs Components ... 18
3 RAID BIOS / EFI Configuration ... 25
4.3 Version Identification ... 26
4.4 RAID Volume Creation ... 27
5.5 Creating RAID Configuration File ... 34
6.10 Freezing Reshape ... 46
7 Online Capacity Expansion ... 47
8 RAID Monitoring ... 48
8.1 mdmon ... 48
8.2 Monitoring Using mdadm ... 49
8.3 Configuration File for Monitoring ... 51
8.4 Examples of monitored events in syslog ... 52
9 Recovery of RAID Volumes ... 53
9.1 Removing Failed Disk(s) ... 53
9.2 Rebuilding ... 54
9.3 Auto Rebuild ... 55
10 SGPIO ... 60
10.1 SGPIO Utility ... 60
10.2 Ledctl Utility ... 61
10.3 Ledmon Service ... 63
11 SAS Management Protocol Utilities ... 64
11.1 smp_discover ... 64
11.1.1 Examples ... 64
11.2 smp_phy_control ... 68
11.2.1 Examples ... 68
11.3 smp_rep_manufacturer ... 68
11.3.1 Examples ... 68
11.4 smp_rep_general ... 69
11.4.1 Examples ... 69

Figures

Figure 1. Matrix RAID ... 15
Figure 2. User Prompt ... 26

Tables

Table 1. RAID 0 Overview ... 11
Table 2. RAID 1 Overview ... 12
Table 3. RAID 5 Overview ... 13
Table 4. RAID 10 Overview ... 14
Table 5. mdadm monitor Parameters ... 49
Table 6. Monitoring Events ... 50
Table 7. SGPIO Utility Options ... 60

Revision History

Document Number   Revision Number   Description                  Revision Date
327602            001               Initial Developer Release.   June 2012

1 Introduction

The purpose of this document is to enable a user to properly set up and configure a system using the Linux MDADM application for Intel Matrix Storage. It provides steps for setup and configuration, as well as a brief overview of Linux MDADM features.

Note: The information in this document is only relevant on systems with a supported Intel chipset and a supported operating system.

Supported Intel chipsets -

http://support.intel.com/support/chipsets/IMSM/sb/CS-020644.htm

Supported operating systems -

http://support.intel.com/support/chipsets/IMSM/sb/CS-020648.htm

Note: The majority of the information in this document is related to either software configuration or hardware integration. Intel is not responsible for the software written by third party vendors or the implementation of Intel components in the products of third party manufacturers.

1.1 Terminology

Term  Description

… creating data redundancy and increasing fault tolerance.

RAID 5 (striping with parity): The data in the RAID volume and parity are striped across the array's members. Parity information is written with the data in a rotating sequence across the members of the array. This RAID level is a preferred configuration for efficiency, fault-tolerance, and performance.

RAID Array: A logical grouping of physical hard drives.

RAID Volume: A fixed amount of space across a RAID array that appears as a single hard drive.

Recovery Volume: A volume utilizing Intel(R) Rapid Recover Technology.

Kilobyte: Unit amount for 1024 bytes, or 2^10 bytes.

Megabyte: Unit amount for 2^20 bytes.

1.2 Reference Documents

Document          Document No./Location
mdadm manpages    Linux manpages
Ledmon manpages   Linux manpages

2 Intel® Matrix Storage Manager Features

2.1 Feature Overview

The Intel® Matrix Storage Manager software package provides high-performance Serial ATA and Serial ATA RAID capabilities for supported operating systems. Features include:

• Intel® Rapid Recover Technology
• Advanced Host Controller Interface (AHCI) support
• SAS Controller Unit (SCU) support

2.2 RAID 0 (Striping)

RAID 0 uses the read/write capabilities of two or more hard drives working in parallel to maximize the storage performance of a computer system.

Table 1 provides an overview of the advantages, the level of fault-tolerance provided, and the typical usage of RAID 0.

Application: Typically used in desktops and workstations for maximum performance for temporary data and high I/O rate. 2-drive RAID 0 available in specific mobile configurations.

2.3 RAID 1 (Mirroring)

A RAID 1 array contains two hard drives where the data between the two is mirrored in real time to provide good data reliability in the case of a single disk failure; when one disk drive fails, all data is immediately available on the other without any impact to the integrity of the data.

Table 2 provides an overview of the advantages, the level of fault-tolerance provided, and the typical usage of RAID 1.

Table 2. RAID 1 Overview

Hard Drives Required: 2
Advantage: 100% redundancy of data. One disk may fail, but data will continue to be accessible. A rebuild to a new disk is recommended to maintain data redundancy.
Fault-tolerance: Excellent – disk mirroring means that all data on one disk is duplicated on another disk.
Application: Typically used for smaller systems where capacity of one disk is sufficient and for any application(s) requiring very high availability. Available in specific mobile configurations.

Refer to the following web site for more information on RAID 1:

2.4 RAID 5 (Striping with Parity)

A RAID 5 array contains three or more hard drives where the data and parity are striped across all the hard drives in the array. Parity is a mathematical method for recreating data that was lost from a single drive, which increases fault-tolerance. If there are N disks in the RAID 5 volume, the capacity for data would be N – 1 disks. For example, if the RAID 5 volume has 5 disks, the data capacity for this RAID volume consists of four disks.

Linux MDRAID supports four types of parity layout. However, Intel IMSM only supports the left-asymmetric parity layout.

Table 3 provides an overview of the advantages, the level of fault-tolerance provided, and the typical usage of RAID 5.

Table 3. RAID 5 Overview

Hard Drives Required: 3-6
Advantage: Higher percentage of usable capacity and high read performance as well as fault-tolerance.
Fault-tolerance: Excellent – parity information allows data to be rebuilt after replacing a failed hard drive with a new drive.
Application: Storage of large amounts of critical data. Not available in mobile configurations.

2.5 RAID 10

A RAID 10 array uses four hard drives to create a combination of RAID levels 0 and 1. It is a striped set whose members are each a mirrored set.

Table 4 provides an overview of the advantages, the level of fault-tolerance provided, and the typical usage of RAID 10.

Table 4. RAID 10 Overview

Hard Drives Required: 4
Advantage: Combines the read performance of RAID 0 with the fault-tolerance of RAID 1.
Fault-tolerance: Excellent – disk mirroring means that all data on one disk is duplicated on another disk.
Application: High-performance applications requiring data protection, such as video editing. Not available in mobile configurations.

2.6 Matrix RAID

Matrix RAID allows you to create two RAID volumes on a single RAID array. As an example, on a system with an Intel® 82801GR I/O controller hub (ICH7R), Intel® Matrix Storage Manager allows you to create both a RAID 0 volume as well as a RAID 5 volume across four Serial ATA hard drives. An important requirement of Matrix RAID is that, within a Matrix RAID container, the volumes inside the container must span the same set of member disks. Refer to Figure 1.

Figure 1. Matrix RAID
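As an illustration of this constraint, a minimal mdadm sketch that creates two volumes inside one IMSM container so that both span the same four disks (the device names and the size of the first volume are illustrative assumptions):

mdadm -C /dev/md0 -e imsm -n 4 /dev/sd[b-e]
mdadm -C /dev/md/Volume0 /dev/md0 -n 4 -l 0 -z 100G
mdadm -C /dev/md/Volume1 /dev/md0 -n 4 -l 5

The second volume is created in the remaining space of the same container, so it automatically spans the same member disks.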

2.7 Advanced Host Controller Interface

2.8 SAS Controller Unit

SCU is the Intel® Serial Attached SCSI Controller Unit that is part of the C600 family Platform Controller Hub. The Linux SCU driver (isci) has been upstreamed to the Linux kernel since kernel version v3.0. However, the latest Linux kernel is always recommended to get the latest bug fixes and features.

2.8.1 SCU OEM Parameters

The SCU driver requires proper OEM parameters to be loaded in order to set the correct PHY settings. The appropriate OEM parameters are loaded from the platform, either from the OROM region when booting legacy or via the EFI variable mechanism when booting EFI. Below is an example of what you may see from the isci driver load. The driver message should state that the OEM parameters were loaded from "platform", which indicates the driver has found good OEM parameters from the OROM or EFI.

isci: Intel(R) C600 SAS Controller Driver - version 1.1.0

isci 0000:03:00.0: driver configured for rev: 5 silicon

isci 0000:03:00.0: OEM parameter table found in OROM

isci 0000:03:00.0: OEM SAS parameters (version: 1.1) loaded (platform)

isci 0000:03:00.0: SCU controller 0: phy 3-0 cables: {short, short,

short, short}

scsi6 : isci

isci 0000:03:00.0: SCU controller 1: phy 3-0 cables: {short, short,
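The same information can be checked after boot by filtering the kernel log; a simple sketch (the exact message text can vary with the driver version):

dmesg | grep -i isci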

2.8.2 Linux libsas Sysfs Components

Linux provides driver information through sysfs, a virtual file system. The example below provides information on some of the libsas related components that can be useful or informational. The SAS related entries can be found in the /sys/class sysfs directory.

../../devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:08.0/0000:03:00.0/host6/sas_host/host6

lrwxrwxrwx 1 root root 0 May 18 13:45 host7 ->
../../devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:08.0/0000:03:00.0/host7/sas_host/host7

Below shows the devices attached to expander-6:0:

ls -1 /sys/class/sas_end_device/ | grep end_device-6

end_device-6:0:10
end_device-6:0:11
end_device-6:0:12
end_device-6:0:13
end_device-6:0:14
end_device-6:0:15
end_device-6:0:24
end_device-6:0:4
end_device-6:0:5
end_device-6:0:6
end_device-6:0:7
end_device-6:0:8
end_device-6:0:9

The example above shows that the first four PHYs in the expander are missing, and the 24th phy is an extra virtual phy that is used by the expander internally.

The Linux disk name can be found a few levels deeper:

There are 4 phys and 4 narrow ports, and this means the 4 end devices are connected directly to the HBA. This can be shown:

ls -l /sys/class/sas_end_device/ | grep end_device-6

lrwxrwxrwx 1 root root 0 May 18 09:16 end_device-6:0 ->
../../devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:08.0/0000:03:00.0/host6/port-6:0/end_device-6:0/sas_end_device/end_device-6:0

lrwxrwxrwx 1 root root 0 May 18 09:16 end_device-6:1 ->
../../devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:08.0/0000:03:00.0/host6/port-6:1/end_device-6:1/sas_end_device/end_device-6:1

lrwxrwxrwx 1 root root 0 May 18 09:16 end_device-6:2 ->
../../devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:08.0/0000:03:00.0/host6/port-6:2/end_device-6:2/sas_end_device/end_device-6:2

lrwxrwxrwx 1 root root 0 May 18 09:16 end_device-6:3 ->
../../devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:08.0/0000:03:00.0/host6/port-6:3/end_device-6:3/sas_end_device/end_device-6:3

Or by:

ls -l /sys/block/ | grep end_device-6

lrwxrwxrwx 1 root root 0 May 18 09:09 sdb ->
../devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:08.0/0000:03:00.0/host6/port-6:0/end_device-6:0/target6:0:4/6:0:4:0/block/sdb

lrwxrwxrwx 1 root root 0 May 18 09:09 sdc ->
../devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:08.0/0000:03:00.0/host6/port-6:1/end_device-6:1/target6:0:1/6:0:1:0/block/sdc

lrwxrwxrwx 1 root root 0 May 18 09:09 sdd ->
../devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:08.0/0000:03:00.0/host6/port-6:2/end_device-6:2/target6:0:5/6:0:5:0/block/sdd

lrwxrwxrwx 1 root root 0 May 18 09:09 sde ->
../devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:08.0/0000:03:00.0/host6/port-6:3/end_device-6:3/target6:0:3/6:0:3:0/block/sde

3 RAID BIOS / EFI Configuration

3.1 Overview

To install the Intel® Matrix Storage Manager, the system BIOS must include the Intel® Matrix Storage Manager option ROM or EFI driver. The Intel® Matrix Storage Manager option ROM / EFI driver is tied to the controller hub. For detailed documentation please see the Intel® Rapid Storage Technology enterprise (Intel® RSTe) Software User's Guide.

3.2 Enabling RAID in BIOS

4 Intel® Matrix Storage Manager Option ROM

4.1 Overview

The Intel® Matrix Storage Manager option ROM is a PnP option ROM that provides a pre-operating system user interface for RAID configurations. It also provides BIOS and DOS disk services (Int13h).

4.2 User Interface

To enter the Intel® Matrix Storage Manager option ROM user interface, press the <Ctrl> and <i> keys simultaneously when prompted during the Power-On Self Test (POST). Refer to Figure 2.

Figure 2. User Prompt

NOTE: The hard drive(s) and hard drive information listed for your system can differ from the following example.

4.3 Version Identification

4.4 RAID Volume Creation

Use the following steps to create a RAID volume using the Intel® Matrix Storage Manager user interface:

Note: The following procedure should only be used with a newly-built system or if you are reinstalling your operating system. The following procedure should not be used to migrate an existing system to RAID 0. If you wish to create matrix RAID volumes after the operating system software is loaded, they should be created using the MDADM tool in the Linux distribution.

1. Press the <Ctrl> and <i> keys simultaneously when the following window appears during POST:

3. Type in a volume name and press the <Enter> key, or press the <Enter> key to accept the default name.

5. Press the <Enter> key to select the physical disks. A dialog similar to the following will appear:

7. Unless you have selected RAID 1, select the strip size by using the <↑> or <↓> keys to scroll through the available values, then press the <Enter> key.

9. At the Create Volume prompt, press the <Enter> key to create the volume. The following prompt will appear:

10. Press the <Y> key to confirm volume creation.

11. To exit the option ROM user interface, select option 5. Exit and press the <Enter> key.

12. Press the <Y> key again to confirm exit.

5 Volume Creation

Warning: Creating a RAID volume will permanently delete any existing data on the selected hard drives. Back up all important data before beginning these steps.

Below is an example of creating a RAID 5 volume with 6 disks:

1. First, a RAID container with IMSM metadata is created. The range form of the device names can be used to specify the disks, although the individual disks can also be listed out, i.e. /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg. A sketch of this step is shown below.
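A minimal sketch of this container creation step, assuming the container is named /dev/md0 (the range form /dev/sd[b-g] is equivalent to listing the six disks individually):

mdadm -C /dev/md0 -e imsm -n 6 /dev/sd[b-g]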

2. Next a RAID 5 volume is created:

mdadm -C /dev/md/Volume0 /dev/md0 -n 6 -l 5

The command creates a RAID 5 volume /dev/md/Volume0 within the /dev/md0 container.

5.2 Filesystem Creation on RAID Volume

After the RAID volume has been created, a filesystem can be created in order to allow the mounting of the RAID volume:

mkfs.ext4 /dev/md/Volume0

Once the filesystem has been created, it can be mounted:

mount /dev/md/Volume0 /mnt/myraidvolume

5.3 RAID Volume Creation Examples

To create a RAID 0 volume, use the following example:

mdadm -C /dev/md/Volume0 /dev/md0 -n 2 -l 0

To create a RAID 1 volume, use the following example:

mdadm -C /dev/md/Volume0 /dev/md0 -n 2 -l 1

To create a RAID 5 volume, use the following example:

mdadm -C /dev/md/Volume0 /dev/md0 -n 3 -l 5

To create a RAID 10 volume, use the following example:

mdadm -C /dev/md/Volume0 /dev/md0 -n 4 -l 10

Note: To create multiple RAID volumes in the same container, they MUST span equal

5.4 Adding a Spare Disk

Adding a spare disk allows immediate reconstruction of the RAID volume when a device failure is detected. MDRAID will mark the failed device as "bad" and start reconstruction with the first available spare disk. The spare disk can also be used to grow the RAID volume. The spare disks sit idle during normal operations. When using mdadm with IMSM metadata, the spare disk added to a container is dedicated to that specific container. The following command adds a spare disk to the designated container:

mdadm -a /dev/md0 /dev/sde

5.5 Creating RAID Configuration File

A configuration file can be created to record the existing RAID volumes. The information can be extracted from the existing RAID setup. The configuration file is typically stored at the default location of /etc/mdadm.conf. This allows a consistent assembly of the appropriate RAID volumes:

mdadm -E -s --config=mdadm.conf > /etc/mdadm.conf

5.6 RAID Volume Initialization / Resync

Immediately after a RAID volume has been created, initialization (or resync) begins.
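The progress of the initialization can be followed in /proc/mdstat, which is used throughout this guide; for example (the refresh interval is arbitrary):

watch -n 5 cat /proc/mdstat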

6 Volume Operations

mdadm provides various options to assemble, monitor, examine, or stop RAID volumes.

6.1 Erasing RAID Metadata

Having incorrect or bad RAID metadata can cause RAID volumes to be assembled incorrectly. The metadata can be erased with the following command to make sure the disk is clean. This operation does not attempt to wipe existing user data.

mdadm --zero-superblock /dev/sdb

Multiple disks can be specified to clear the superblock at the same time.
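For example (the device names are illustrative):

mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sdd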

6.2 Volume Assemble

RAID volumes can be created via the OROM user interface or mdadm. Inactive RAID volumes that have been created can be activated using the assemble option with mdadm. The following command scans for the mdadm configuration file at /etc/mdadm.conf in order to assemble the RAID volumes. If the configuration file is not found, it scans all available disks for RAID member disks and assembles all the RAID volumes:

mdadm -A -s

To manually assemble and activate RAID volumes without the configuration file, the following example can be used:

mdadm -A /dev/md0 -e imsm /dev/sda /dev/sdb /dev/sdc /dev/sdd
mdadm -I /dev/md0

6.3 Stopping the Volumes

To stop all active RAID volumes, the following command can be used. mdadm will scan for and stop all running RAID volumes and containers:

mdadm -S -s

However, a RAID volume name can be specified to stop that volume directly:

mdadm -S /dev/md/Volume0

And to stop a container, the following command can be used:

mdadm -S /dev/md0

6.4 Reporting RAID Information

Use the following command to print out details about a RAID container or volume:

mdadm -D /dev/md0

/dev/md0:

Version : imsm

Raid Level : container

Total Devices : 5

Working Devices : 5

UUID : b559b502:b199f86f:ee9fbd40:cd10e91d

Member Arrays :

Number Major Minor RaidDevice

0 8 32 - /dev/sdc

1 8 48 - /dev/sdd

2 8 80 - /dev/sdf

To display details about a RAID volume:

mdadm -D /dev/md/Volume0

/dev/md/Volume0:

Container : /dev/md0, member 0

Raid Level : raid5

Array Size : 39999488 (38.15 GiB 40.96 GB)

Used Dev Size : 9999872 (9.54 GiB 10.24 GB)

Raid Devices : 5

Total Devices : 5

Update Time : Thu Jun 17 07:40:23 2010

State : clean

Active Devices : 5

Working Devices : 5

Failed Devices : 0

Spare Devices : 0

Layout : left-asymmetric

Chunk Size : 128K

UUID : 084d2b20:09897744:36757c5b:77e0e945

Number Major Minor RaidDevice State

4 8 96 0 active sync /dev/sdg

3 8 48 1 active sync /dev/sdd

2 8 32 2 active sync /dev/sdc

1 8 16 3 active sync /dev/sdb


Chunk Size : 128 KiB

Reserved : 0

Migrate State : idle

Map State : normal

Dirty State : clean

Disk00 Serial : 9QMCLYES

State : active

Id : 00000000

Usable Size : 976768654 (465.76 GiB 500.11 GB)

Disk01 Serial : 9QMCLYB9

State : active

Id : 00000000

Usable Size : 976768654 (465.76 GiB 500.11 GB)

Disk03 Serial : 9QMCM7XY

State : active

Id : 00000000

Usable Size : 976768654 (465.76 GiB 500.11 GB)

Disk04 Serial : 9QMCF38Z

State : active

Id : 00000000

To get the most current status on all RAID volumes, the file /proc/mdstat can be examined. This file is a special file that is updated continuously to show the status of all the containers and RAID volumes. In the example below, the status shows that the currently available RAID supports are levels 4, 5, and 6. md127 is the active RAID volume with RAID level 5 and a 128k stripe size. The RAID volume contains 5 disks that are all in normal (UP) status. md0 is the IMSM container for the RAID volume.

cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]

md127 : active raid5 sdg[4] sdd[3] sdc[2] sdb[1] sdf[0]

39999488 blocks super external:/md0/0 level 5, 128k chunk, algorithm 0 [5/5] [UUUUU]

md0 : inactive sdb[4](S) sdg[3](S) sdf[2](S) sdd[1](S) sdc[0](S)

11285 blocks super external:imsm

unused devices: <none>

Note: When creating containers and volumes, one will notice that in /proc/mdstat the device names will not match up. For example, when /dev/md/Volume0 is created, md127 will be shown in /proc/mdstat and in other detailed output as well. /dev/md/Volume0 is created as an alias of the /dev/md127 device node. Looking in the

6.5 To Fail an Active Drive

In order to manually mark an active drive as a failed drive (or set it as faulty), the following command can be issued:

mdadm -f /dev/md/Volume0 /dev/sdb

6.6 Remove a Failed Drive

To remove a failed drive, the following command needs to be executed. This only works on a container based RAID volume:

mdadm -r /dev/md0 /dev/sdb

6.7 Report RAID Details from BIOS

To see what Intel® RAID support is provided by the BIOS, issue the command:

mdadm --detail-platform

Platform : Intel(R) Matrix Storage Manager
Version : 8.9.0.1023

RAID Levels : raid0 raid1 raid10 raid5

Chunk Sizes : 4k 8k 16k 32k 64k 128k

Max Disks : 6

Max Volumes : 2

I/O Controller : /sys/devices/pci0000:00/0000:00:1f.2

Port0 : /dev/sda (3MT0585Z)

Port1 : - non-disk device (ATAPI DVD D DH16D4S) -

Port2 : /dev/sdb (WD-WCANK2850263)

Port3 : /dev/sdc (3MT005ML)

Port4 : /dev/sdd (WD-WCANK2850441)

Port5 : /dev/sde (WD-WCANK2852905)

6.8 Logging

Various messages coming from the MDRAID subsystem in the kernel are logged. Typically the messages are stored in the log file /var/log/messages in popular Linux distributions, along with other kernel status, warning, and error output. Below is an example snippet of what the log may look like:

Jun 17 06:20:04 testbox kernel: raid5: allocated 5334kB for md126
Jun 17 06:20:04 testbox kernel: 0: w=1 pa=0 pr=5 m=1 a=0 r=5 op1=0 op2=0
Jun 17 06:20:04 testbox kernel: 1: w=2 pa=0 pr=5 m=1 a=0 r=5 op1=0 op2=0
Jun 17 06:20:04 testbox kernel: 2: w=3 pa=0 pr=5 m=1 a=0 r=5 op1=0 op2=0
Jun 17 06:20:04 testbox kernel: 3: w=4 pa=0 pr=5 m=1 a=0 r=5 op1=0 op2=0
Jun 17 06:20:04 testbox kernel: 4: w=5 pa=0 pr=5 m=1 a=0 r=5 op1=0 op2=0
Jun 17 06:20:04 testbox kernel: raid5: raid level 5 set md126 active with 5 out of 5 devices, algorithm 0
Jun 17 06:20:04 testbox kernel: RAID5 conf printout:
Jun 17 06:20:04 testbox kernel: --- rd:5 wd:5
Jun 17 06:20:04 testbox kernel: disk 0, o:1, dev:sdg
Jun 17 06:20:04 testbox kernel: disk 1, o:1, dev:sdd
Jun 17 06:20:04 testbox kernel: disk 2, o:1, dev:sdc
Jun 17 06:20:04 testbox kernel: disk 3, o:1, dev:sdb
Jun 17 06:20:04 testbox kernel: disk 4, o:1, dev:sdf
Jun 17 06:20:04 testbox kernel: md127: detected capacity change from 0 to 40959475712
Jun 17 06:20:04 testbox kernel: md127: unknown partition table

6.9 RAID Level Migration

The RAID level migration feature allows changing of the RAID volume level without loss of the data stored on the volume. It does not require re-installation of the operating system. All applications and data remain intact.

The following table shows the available migration support with Intel® IMSM metadata. You must have the number of drives necessary for the level you are converting to available as spare drives.

Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]

md127 : active raid1 sdb[1] sda[0]

102400 blocks super external:/md0/0 [2/2] [UU]

md0 : inactive sdb[1](S) sda[0](S)

2210 blocks super external:imsm

2) The first step is to migrate from RAID 1 to RAID 0:

mdadm -G /dev/md127 -l 0
cat /proc/mdstat

Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]

md127 : active raid0 sdb[1]

Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]

md127 : active raid0 sda[2] sdb[1]

Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]

md127 : active raid0 sda[2] sdb[1]

204800 blocks super external:/md0/0 64k chunks

md0 : inactive sdc[2](S) sdb[1](S) sda[0](S)

3315 blocks super external:imsm

5) Migrating from RAID 0 to RAID 5:

mdadm -G /dev/md127 -l 5 --layout=left-asymmetric

cat /proc/mdstat

Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]

md127 : active raid5 sdc[3] sda[2] sdb[1]

204800 blocks super external:/md0/0 level 5, 64k chunk, algorithm 0 [3/3] [UUU]

md0 : inactive sdc[2](S) sdb[1](S) sda[0](S)

3315 blocks super external:imsm

unused devices: <none>

6.10 Freezing Reshape

If a RAID volume is in the process of a reshape, the reshape process should be frozen during the initramfs booting phase and resumed when the system is fully up. Starting with mdadm 3.2.5 these features are supported. Distributions from the operating system vendors should have taken care of this in their init script setup utilities, but details are described below for customers that are building their own distribution. The parameter --freeze-reshape is used to pause the reshape operation during the system startup initramfs phase. For example:

mdadm -As --freeze-reshape

When reshape is frozen, the status provided by /proc/mdstat will denote the state with a hyphen, such as "super external:-md127/0" instead of "super external:/md127/0":

Personalities : [raid5]

md127 : active raid5 sda[2] sdb[1] sdc[0]

7 Online Capacity Expansion

The Online Capacity Expansion (OLCE) feature allows the capacity expansion of RAID volumes. With the "online" feature, the operation can be performed while a filesystem is mounted on top of the RAID volume. This avoids the down time of taking the RAID volume offline for service, or loss of data.

The size of a RAID volume can be increased by adding additional disks to the RAID container or (only if it is the last volume in the container) by expanding it onto existing unused disk space available to the RAID volume. In the first case, if two volumes exist in the same container, OLCE is performed automatically on both volumes (one by one) because of the requirement that all volumes must span the same set of disks for IMSM.

The following commands can be issued to grow the RAID volume. The first assumes that it is the last volume in the container and we have additional room to grow; the second assumes that an additional disk has been added to the IMSM container.

1) If there is additional room in the last volume of the container, the volume can be grown to the maximum available capacity. This feature is only available starting with mdadm v3.2.5:

mdadm -G /dev/md/Volume0 --size=max

2) The example below adds a single disk to the RAID container and then grows the volume(s). Because IMSM volumes inside a container must span the same number of disks, all volumes are expanded. A backup file in which MDRAID will store the backup superblock is specified. This file must not reside on any of the active RAID volumes that are being worked on.

mdadm -a /dev/md0 /dev/sde
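A sketch of the grow step that would follow, assuming the container /dev/md0 now holds one more disk than the volumes currently use and that the backup file resides outside the container (the raid-device count and the path are illustrative):

mdadm -G /dev/md0 -n 5 --backup-file=/mnt/backup/md0_grow.bak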

8 RAID Monitoring

There are two components within the mdadm tools to monitor events for the RAID volumes. mdadm can be used to monitor general RAID events, and mdmon provides the ability to monitor "metadata event" occurrences such as disk failures, clean-to-dirty transitions, etc. for external metadata based RAID volumes. The kernel provides the ability to report such actions to userspace via sysfs, and mdmon takes action accordingly with its monitoring capability. mdmon polls sysfs looking for changes in the array_state and sync_action entries and in the per-disk state attribute files.

8.1 mdmon

The mdadm monitor, mdmon, is automatically started when MDRAID volumes are activated by mdadm through creation or assembly. However, the daemon can be started manually:

mdmon /dev/md0

The --all parameter can be used in place of the container name to start monitors for all active containers.

mdmon must be started in the initramfs in order to support an external metadata RAID array as the root filesystem. mdmon needs to be restarted in the new namespace once the final root filesystem has been mounted.
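A sketch of that restart from the real root filesystem, letting a new mdmon instance take over all active containers from the one started in the initramfs:

mdmon --takeover --all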

8.2 Monitoring Using mdadm

mdadm monitoring can be started with the following command line:

mdadm --monitor --scan --daemonise --syslog

The command above runs mdadm as a daemon to monitor all md devices. All events will be reported to syslog. The user can monitor the syslog and filter for specific mdadm events generated.

There are additional command line parameters that can be passed to mdadm when starting the monitor.

Table 5. mdadm monitor Parameters

Long form     Short form  Description
--mail        -m          Provide mail address to email alerts or failures to.
--syslog      -y          Cause all events to be reported through syslog. The messages have facility of 'daemon' and varying priorities.
--increment   -r          Give a percentage increment. mdadm will generate RebuildNN events with the given percentage increment.
--daemonise   -f          Run as a background daemon if monitoring.
--pid-file    -i          Write the pid of the daemon process to the specified file.
--no-sharing  N/A         This inhibits the functionality for moving spares
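For example, a sketch that reports events to syslog and also emails alerts (the address is a placeholder):

mdadm --monitor --scan --daemonise --syslog --mail=admin@example.com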

The following table presents all the events that are reported by mdadm monitor:

Table 6. Monitoring Events

Event Name         Description
DeviceDisappeared  An MD array previously configured no longer exists.
RebuildStarted     An MD array started reconstruction.
RebuildNN          NN is a 2 digit number that indicates rebuild has passed that many percent of the total. For example, Rebuild50 will trigger an event when 50% of the rebuild has completed.
RebuildFinished    An MD array has completed rebuild.
Fail               An active component of an array has been marked faulty.
FailSpare          A spare device that was being rebuilt to replace a faulty device has failed.
SpareActive        A spare device that was being rebuilt to replace a faulty device has been rebuilt and is active.
NewArray           A new MD array has been detected in /proc/mdstat.
DegradedArray      A newly discovered array appears to be degraded.
MoveSpare          A spare drive has been moved from one array in a spare group to another array to replace a failed disk. Both arrays are labeled with the same spare group.
SparesMissing      The spare device(s) does not exist in comparison to the config file when the MD array is first discovered.

8.3 Configuration File for Monitoring

mdadm will check the mdadm.conf config file to extract the appropriate entries for monitoring. The following entries can be set:

MAILADDR: This config entry allows an e-mail address to be used for alerts. Only one email address should be used.

MAILFROM: This config entry sets the email address that the alert emails appear to come from. The default would be the "root" user with no domain. This entry overrides the default.

PROGRAM: This config entry sets the program to run when mdmon detects
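A minimal sketch of these entries in /etc/mdadm.conf (the address and program path are placeholders):

MAILADDR admin@example.com
MAILFROM raid-monitor
PROGRAM /usr/sbin/my-md-event-handler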

8.4 Examples of monitored events in syslog

In this example we have a RAID 5 volume:

Personalities : [raid5]
md127 : active raid5 sdd[2] sdc[1] sdb[0]
      204800 blocks super external:/md0/0 level 5, 128k chunk, algorithm 0 [3/3] [UUU]

The following messages can be found in /var/log/messages or the corresponding syslog file the distribution has designated:

May 15 09:58:40 myhost mdadm[9863]: NewArray event detected on md device /dev/md127

May 15 09:58:40 myhost mdadm[9863]: NewArray event detected on md device /dev/md0

When a spare disk has been added:

May 15 09:59:07 myhost mdadm[9863]: SpareActive event detected on md device /dev/md127, component device /dev/sde

When an OLCE command is finished:

May 15 09:59:16 myhost mdadm[9863]: RebuildFinished event detected on md device /dev/md127

When a disk fails:

May 15 10:01:04 myhost mdadm[9863]: Fail event detected on md device /dev/md127, component device /dev/sdb

When a rebuild finishes:

May 15 10:02:22 myhost mdadm[9863]: RebuildFinished event detected on md device /dev/md127

May 15 10:02:22 myhost mdadm[9863]: SpareActive event detected on md device /dev/md127

When all MD devices are stopped:

May 15 10:03:27 myhost mdadm[9863]: DeviceDisappeared event detected on md device /dev/md127

9 Recovery of RAID Volumes

Recovery is one of the most important aspects of using RAID. It allows rebuilding of RAID volumes on a system when disk failure occurs without the loss of any data. Recovery is only possible in the case of the following RAID levels: 1, 5, and 10. In general, recovery is possible if no more than one disk fails. However, in the case of RAID 10,

md127 : active (read-only) raid5 sde[4] sdd[3] sdc[2] sdb[1] sdf[0]

39999488 blocks super external:/md0/0 level 5, 512k chunk, algorithm 0 [5/5] [UUUUU]

md0 : inactive sdb[4](S) sdf[3](S) sde[2](S) sdc[1](S) sdd[0](S)

11285 blocks super external:imsm

unused devices: <none>

When a disk fails, in this instance /dev/sde, the following is displayed in /proc/mdstat:

Personalities : [raid6] [raid5] [raid4]

md127 : active raid5 sdf[4] sdb[3] sdc[2] sdd[1]

39999488 blocks super external:/md0/0 level 5, 512k chunk, algorithm 0 [5/4] [_UUUU]

md0 : inactive sdf[4](S) sde[3](S) sdd[2](S) sdc[1](S) sdb[0](S)

9.2 Rebuilding

At this point, the RAID volume is running in degraded mode. However, it is still operational. If there are spare disks available in the container, rebuild of the RAID volume will automatically commence. A spare can also be manually added to start the rebuild process:

mdadm --add /dev/md0 /dev/sdg

Personalities : [raid6] [raid5] [raid4]

md127 : active raid5 sdg[5] sdf[4] sdb[3] sdc[2] sdd[1]

39999488 blocks super external:/md0/0 level 5, 512k chunk, algorithm 0 [5/4] [_UUUU]

[==>...] recovery = 11.5% (1154588/9999872) finish=2.6min speed=54980K/sec

md0 : inactive sdg[5](S) sdf[4](S) sde[3](S) sdd[2](S) sdc[1](S) sdb[0](S)

1254 blocks super external:imsm

9.3 Auto Rebuild

Auto-rebuild allows a RAID volume to be automatically rebuilt when a disk fails. There are 3 different scenarios in which this can happen:

1. There is a rebuild capable RAID volume with no spare disk in the container. If one of the disks in the volume fails it enters degraded mode. When a spare disk is added manually to the container, rebuild starts automatically (as referenced in section 9.2 Rebuilding).

2. There is a rebuild capable RAID volume with at least one spare disk in the container. If one of the disks in the volume fails, the spare disk is automatically pulled in, and the rebuild starts.

3. There are two containers. One container has a spare disk and the other one does not. If mdadm is running in monitor mode, and the appropriate policy is configured in the mdadm.conf file, a spare disk will be moved automatically from one container to the other if there is a RAID volume failure that requires a spare disk for rebuild.

For scenario number three, an example is presented below:

1. Create container "md1" with 3 disks:

mdadm -C /dev/md1 -e imsm -n3 /dev/sda /dev/sdb /dev/sdc

2. Create RAID 1 volume "Volume1" in container "md1"; disk /dev/sdc remains a spare disk:

mdadm -C /dev/md/Volume1 -l1 -n2 /dev/sda /dev/sdb

3. Create container "md2" with 2 disks:
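A sketch of this step, assuming the two disks /dev/sdd and /dev/sde that appear in the Volume2 creation below:

mdadm -C /dev/md2 -e imsm -n2 /dev/sdd /dev/sde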

4. Create RAID 1 volume "Volume2" in container "md2", with no spare disks:

mdadm -C /dev/md/Volume2 -l1 -n2 /dev/sdd /dev/sde

Personalities : [raid5] [raid1] [raid0]

md126: active raid1 sde[1] sdd[0]

1048576 blocks super external:/md2/0 [2/2] [UU]

md2 : inactive sde[1](S) sdd[0](S)

2210 blocks super external:imsm

md127: active raid1 sdb[1] sda[0]

1048576 blocks super external:/md1/0 [2/2] [UU]

md1 : inactive sdc[2](S) sdb[1](S) sda[0](S)

3315 blocks super external:imsm

5. Save the configuration file:

mdadm -Ebs > /etc/mdadm.conf

6. Add the policy, with the same domain and the same action for all disks, to the configuration file; this allows the spare to move from one container to another for rebuild:

echo "POLICY domain=DOMAIN path=* metadata=imsm action=spare-same-slot" >> /etc/mdadm.conf

The configuration file in /etc/mdadm.conf may look like below:

ARRAY metadata=imsm UUID=67563d6a:3d253ad0:6e649d99:01794f88 spares=1
ARRAY /dev/md/Volume2 container=67563d6a:3d253ad0:6e649d99:01794f88 member=0 UUID=76e507f1:fadb9a42:da46d784:2e2166e8
ARRAY metadata=imsm UUID=267445e7:458c89eb:bd5176ce:c37281b7
ARRAY /dev/md/Volume1 container=267445e7:458c89eb:bd5176ce:c37281b7 member=0 UUID=25025077:fba9cfab:e4ad212d:3e5fce11
POLICY domain=DOMAIN path=* metadata=imsm action=spare-same-slot

7. Make sure mdadm is in monitor mode:
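One way to do this is the monitor invocation from section 8.2, shown here as a sketch:

mdadm --monitor --scan --daemonise --syslog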

8. Fail one of the disks in volume "Volume2", the volume without a spare:

mdadm --fail /dev/md/Volume2 /dev/sdd

The spare disk /dev/sdc should automatically move from the container "md1" to the container "md2", and the rebuild of "Volume2" starts automatically:

Personalities : [raid5] [raid1] [raid0]

md126: active raid1 sdc[2] sde[1]

1048576 blocks super external:/md2/0 [2/1] [_U]

[========>...] recovery = 41.2% (432896/1048576) finish=0.0min

speed=144298K/sec

md2 : inactive sdc[2](S) sde[1](S) sdd[0](S)

5363 blocks super external:imsm

md127: active raid1 sdb[1] sda[0]

1048576 blocks super external:/md1/0 [2/2] [UU]

md1 : inactive sdb[1](S) sda[0](S)

2210 blocks super external:imsm

When the rebuild has completed:

Personalities : [raid5] [raid1] [raid0]

md126: active raid1 sdc[2] sde[1]

1048576 blocks super external:/md2/0 [2/2] [UU]

md2 : inactive sdc[2](S) sde[1](S) sdd[0](S)

5363 blocks super external:imsm

md127: active raid1 sdb[1] sda[0]

1048576 blocks super external:/md1/0 [2/2] [UU]

md1 : inactive sdb[1](S) sda[0](S)

2210 blocks super external:imsm

10 SGPIO

Serial General Purpose Input/Output (SGPIO) is a four signal bus used between a storage controller and a backplane. The official name designated to SGPIO is SFF-8485 by the Small Form Factor (SFF) Committee. SGPIO provides the capability of blinking LEDs on disk arrays and storage backplanes to indicate status.

10.1 SGPIO Utility

Linux uses the sgpio utility to control the LEDs on a hard disk drive bay enclosure. The following table describes the options the sgpio utility provides:

Table 7. SGPIO Utility Options

-h, --help     Displays the help text
-V, --version  Displays the utility version and AHCI SGPIO specification version
-d, --disk     Disk name of LED location, i.e. sda, sdb, sdc. Can be a comma delimited list
-p, --port     SATA port number of LED location. Can be used when a disk name is no longer valid, i.e. 0, 1, 2, 4. Can be a comma delimited list
-s, --status   The LED status to set: locate, fault, rebuild, off
-f, --freq     The frequency of the LED blinking in Hz, between 1 and 10

For example, the following command sets the sda, sdb, and sdc LEDs to fault with a 3 Hz flash rate:
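A sketch of that command using the options from Table 7 (the exact form of the original example is an assumption):

sgpio -d sda,sdb,sdc -s fault -f 3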

10.2 Ledctl Utility

The table below shows all the "patterns" that can be specified:

Pattern Name            Usage
locate                  Turns locate LED on for given device(s) or associated empty slot(s).
locate_off              Turns locate LEDs off.
normal                  Turns status, failure, and locate LEDs off.
off                     Turns status and failure LEDs off.
ica or degraded         Display "in a critical array" pattern.
rebuild or rebuild_p    Display "rebuild" pattern.
ifa or failed_array     Display "in a failed array" pattern.
hotspare                Display "hotspare" pattern.
pfa                     Display "predicted failure analysis" pattern.
failure or disk_failed  Display "failure" pattern.
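A minimal usage sketch, assuming the pattern_name=device_list argument syntax of ledctl (the device is illustrative):

ledctl locate=/dev/sda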

10.3 Ledmon Service

Ledmon is a daemon service that monitors the state of MDRAID devices or a block device. The service monitors all RAID volumes; there is no method to specify individual volumes to monitor. Like ledctl, ledmon has only been verified with Intel® storage controllers.

Ledmon can be run with the options listed below:

Option               Usage
-c or --config-path= Sets the configuration file path. This overrides any other configuration files. (Although the utility currently does not use a config file.)
-l or --log-path=    Sets the path to a log file. This overrides /var/log/ledmon.log.
-t or --interval=    Sets the time interval in seconds between scans of the sysfs. A minimum of 5 seconds is set.
--quiet, --error, --warning, --info, --debug, --all
                     Specifies the verbosity level of the log - 'quiet' means no logging at all, and 'all' means to log everything. The levels are given in order. If the user specifies more than one verbose option, the last option comes into effect.
-h or --help         Prints the help text and exits.
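For example, a sketch that starts the daemon with a 10 second scan interval:

ledmon --interval=10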

11 SAS Management Protocol Utilities

smp_utils is a set of command line utilities that are used to invoke SAS Management Protocol (SMP) functions to monitor and manage SAS expanders. More information about smp_utils, its package contents, and usage examples can be found at:

http://sg.danny.cz/sg/smp_utils.html

Below, some helpful commands are described together with usage examples.

11.1 smp_discover

The smp_discover utility sends the SMP DISCOVER request to an SMP target. It may be used to check what devices are attached to the HBA or an expander.

11.1.1 Examples

Finding an HBA:

ls -l /dev/bsg/sas_host*

crw-rw---- 1 root root 253, 1 May 16 16:32 /dev/bsg/sas_host6

crw-rw---- 1 root root 253, 2 May 16 16:32 /dev/bsg/sas_host7

To see what is connected to sas_host6:

smp_discover /dev/bsg/sas_host6

Discover response:

phy identifier: 0

attached device type: expander device

negotiated logical link rate: phy enabled; 3 Gbps

attached initiator: ssp=0 stp=0 smp=1 sata_host=0

attached sata port selector: 0

attached target: ssp=0 stp=0 smp=1 sata_device=0

SAS address: 0x5001e6734b8d2000

attached SAS address: 0x50000d166a80e87f

attached phy identifier: 0

programmed minimum physical link rate: not programmable

hardware minimum physical link rate: not programmable

programmed maximum physical link rate: not programmable

hardware maximum physical link rate: not programmable

phy change count: 0

virtual phy: 0

partial pathway timeout value: 0 us

To probe the PHYs attached to the host controller:

smp_discover -m /dev/bsg/sas_host6

Device <5001e6734b8d2000>, expander:

phy 0:D:attached:[50000d166a80e87f:00 exp i(SMP) t(SMP)] 3 Gbps

phy 1:D:attached:[50000d166a80e87f:00 exp i(SMP) t(SMP)] 3 Gbps

phy 2:D:attached:[50000d166a80e87f:00 exp i(SMP) t(SMP)] 3 Gbps

phy 3:D:attached:[50000d166a80e87f:00 exp i(SMP) t(SMP)] 3 Gbps

In this example sas_host6 is connected to an expander device with all 4 PHYs. Such a configuration creates a wide port.

To see what expander is connected to sas_host6, a simple check in sysfs can be performed:

ls /sys/class/bsg/sas_host6/device/port-6:0/

To see what is connected to the expander-6:0:
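A sketch of the command, following the form used for the host controller above (the expander node name is the one visible under /dev/bsg):

smp_discover -m /dev/bsg/expander-6:0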

phy 4:T:attached:[5000c50017ae9815:00 t(SSP)] 3 Gbps

phy 5:T:attached:[5000c5000051fe39:00 t(SSP)] 3 Gbps

phy 6:T:attached:[5000c50005f437a9:00 t(SSP)] 3 Gbps

phy 7:T:attached:[5000c50005f4373d:00 t(SSP)] 3 Gbps

phy 8:T:attached:[5000c50001cd61d1:00 t(SSP)] 3 Gbps

phy 9:T:attached:[5000c50023c799a1:00 t(SSP)] 3 Gbps

phy 10:T:attached:[5000c50001ab182d:00 t(SSP)] 3 Gbps

phy 11:T:attached:[5000c50005fba135:00 t(SSP)] 3 Gbps

phy 12:T:attached:[5000c5000490c705:00 t(SSP)] 3 Gbps

phy 13:T:attached:[5000c5000051fca9:00 t(SSP)] 3 Gbps

phy 14:T:attached:[5000c500076a4ab5:00 t(SSP)] 3 Gbps

phy 15:T:attached:[5000c50005fd6cb1:00 t(SSP)] 3 Gbps

phy 16:S:attached:[5001e6734b8d2000:02 i(SSP+STP+SMP)] 3 Gbps

phy 17:S:attached:[5001e6734b8d2000:03 i(SSP+STP+SMP)] 3 Gbps

phy 18:S:attached:[5001e6734b8d2000:00 i(SSP+STP+SMP)] 3 Gbps

phy 19:S:attached:[5001e6734b8d2000:01 i(SSP+STP+SMP)] 3 Gbps

phy 20:T:attached:[0000000000000000:00]

phy 21:T:attached:[0000000000000000:00]

phy 22:T:attached:[0000000000000000:00]

phy 23:T:attached:[0000000000000000:00]

phy 24:D:attached:[50000d166a80e87e:24 V i(SSP) t(SSP)] 3 Gbps

11.2 smp_phy_control

The smp_phy_control utility sends the SMP PHY CONTROL request to an SMP target, for example to reset or disable an expander PHY.

11.3 smp_rep_manufacturer

The smp_rep_manufacturer utility sends the REPORT MANUFACTURER INFORMATION request to an SMP target.


11.4 smp_rep_general

The smp_rep_general utility sends the REPORT GENERAL request to an SMP target.

11.4.1 Examples

smp_rep_general -vvvv /dev/bsg/expander-6\:0

Report general request: 40 00 00 00 00 00 00 00
send_req_sgv4: fd=3, subvalue=0

send_req_sgv4: driver_status=0, transport_status=0

device_status=0, duration=0, info=0

din_resid=0, dout_resid=0

Report general response:

expander change count: 547

expander route indexes: 1024

long response: 0

number of phys: 25

table to table supported: 0

zone configuring: 0

self configuring: 0

STP continue AWT: 0

open reject retry supported: 0

configures others: 0

configuring: 0

externally configurable route table: 0

enclosure logical identifier <empty>

12 MDRAID Sysfs Components

Just like the isci driver and libsas, the MDRAID subsystem also has sysfs components that provide information or can be used to tweak behavior and performance. All MDRAID devices present in the system are shown in /sys/block/.

Example:

ls -l /sys/block/md*

lrwxrwxrwx 1 root root 0 May 17 13:26 /sys/block/md126 -> ../devices/virtual/block/md126
lrwxrwxrwx 1 root root 0 May 17 13:26 /sys/block/md127 -> ../devices/virtual/block/md127

Mapping between a device number and its name can be found:

ls -l /dev/md/

total 0

Md devices in /sys/block are symbolic links pointing to /sys/devices/virtual/block. All MD device attributes are in the 'md' subdirectory of the /sys/devices/virtual/block/mdXYZ directory. In the md directory the following contents can be found:


-rw-r--r-- 1 root root 4096 May 18 13:10 sync_force_parallel
-rw-r--r-- 1 root root 4096 May 18 13:10 sync_max
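As a sketch of how these entries can be read (assuming md127 is the volume from the earlier examples), the array state and any running resync or rebuild can be queried directly:

cat /sys/block/md127/md/array_state
cat /sys/block/md127/md/sync_action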
