Geek's Notes Universe :: Forums :: Filesystem

How to mount/restore LVM volumes

root
[ True DAO ]
Posts: 493
mount: unknown filesystem type 'LVM2_member'

I foobar'd a Fedora 4 machine a couple of days ago, and when I went to mount the disk in another machine I got this beautiful error:

mount: unknown filesystem type 'LVM2_member'

Here was the command used:

termcb:~ # mount /dev/hda2 /mnt/old/
mount: unknown filesystem type 'LVM2_member'


And here was the drive:

termcb:~ # fdisk -l


Disk /dev/hda: 30.0 GB, 30005821440 bytes
255 heads, 63 sectors/track, 3648 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14        3648    29198137+  8e  Linux LVM
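As an aside (not part of the original session), `blkid` can confirm the same thing without reading the partition table; the output format shown is typical, with the UUID elided:

```shell
# probe the partition for a filesystem/container signature
blkid /dev/hda2
# typically prints something like:
#   /dev/hda2: UUID="..." TYPE="LVM2_member"
```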


So, using the LVM2 tools, we do a disk scan:

termcb:~ # lvmdiskscan
/dev/ram0 [ 62.50 MB]
/dev/ram1 [ 62.50 MB]
/dev/hda1 [ 101.94 MB]
/dev/sda1 [ 39.19 MB]
/dev/ram2 [ 62.50 MB]
/dev/hda2 [ 27.85 GB] LVM physical volume
/dev/sda2 [ 2.01 GB]
/dev/ram3 [ 62.50 MB]
/dev/sda3 [ 60.00 GB]
/dev/ram4 [ 62.50 MB]
/dev/sda4 [ 86.96 GB]
/dev/ram5 [ 62.50 MB]
/dev/ram6 [ 62.50 MB]
/dev/ram7 [ 62.50 MB]
/dev/ram8 [ 62.50 MB]
/dev/ram9 [ 62.50 MB]
/dev/ram10 [ 62.50 MB]
/dev/ram11 [ 62.50 MB]
/dev/ram12 [ 62.50 MB]
/dev/ram13 [ 62.50 MB]
/dev/ram14 [ 62.50 MB]
/dev/ram15 [ 62.50 MB]
0 disks
21 partitions
0 LVM physical volume whole disks
1 LVM physical volume


Then we run lvdisplay to get the LV Name and VG Name:

termcb:~ # lvdisplay
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol00
VG Name VolGroup00
LV UUID WBjpH6-Jezl-aI1z-XVSp-WzvW-qTDi-c1jkUv
LV Write Access read/write
LV Status NOT available
LV Size 26.06 GB
Current LE 834
Segments 1
Allocation inherit
Read ahead sectors 0

--- Logical volume ---
LV Name /dev/VolGroup00/LogVol01
VG Name VolGroup00
LV UUID Zykaw3-WBHU-oink-38W9-KylN-7u5j-PKx8qT
LV Write Access read/write
LV Status NOT available
LV Size 1.75 GB
Current LE 56
Segments 1
Allocation inherit
Read ahead sectors 0


And also a vgdisplay to make sure it was the right drive:

termcb:~ # vgdisplay
--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 27.84 GB
PE Size 32.00 MB
Total PE 891
Alloc PE / Size 890 / 27.81 GB
Free PE / Size 1 / 32.00 MB
VG UUID ryYRi4-mXOd-XFaW-4xnR-h1cl-hphh-5QQnlM


So at this point I retried the mount, this time using the LV device path (/dev/VG_Name/LV_Name) reported by lvdisplay:

termcb:~ # mount /dev/VolGroup00/LogVol00 /mnt/old/
mount: special device /dev/VolGroup00/LogVol00 does not exist


Still a failure.

So next I ran lvscan to check the status of the logical volumes:

termcb:~ # lvscan
inactive '/dev/VolGroup00/LogVol00' [26.06 GB] inherit
inactive '/dev/VolGroup00/LogVol01' [1.75 GB] inherit
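(An aside for scripting: the inactive volumes can be picked out of lvscan-style output with a helper like the one below — a sketch; `lvs --noheadings -o lv_path` is the more robust, scriptable interface on modern LVM.)

```shell
# Read lvscan-style output on stdin and print the device path of
# every logical volume whose status field is "inactive".
inactive_lvs() {
  # pass the single-quote character in via -v to avoid shell-quoting pain;
  # strip the quotes around the LV path (field 2) before printing it
  awk -v q="'" '$1 == "inactive" { gsub(q, "", $2); print $2 }'
}

# Usage against a live system:  lvscan | inactive_lvs
```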


[censored]… still inactive. That's when I stumbled across this post: http://www.fedoraforum.org/forum/archiv … 64964.html

Now the rest was pretty self-explanatory:

termcb:~ # modprobe dm-mod

termcb:~ # vgchange -ay

termcb:~ # lvscan
ACTIVE '/dev/VolGroup00/LogVol00' [26.06 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol01' [1.75 GB] inherit

termcb:~ # mount /dev/VolGroup00/LogVol00 /mount/point


Success!!
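To recap, the whole fix boils down to three commands (the VG/LV names and mount point are the ones from this session and will differ on other systems):

```shell
# dm-mod is the device-mapper kernel module that LVM2 sits on top of
modprobe dm-mod
# activate every volume group found on the attached disks
vgchange -ay
# the LV device nodes now exist, so the mount succeeds
mount /dev/VolGroup00/LogVol00 /mnt/old/
```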
LVM recovery tale


Over the weekend I had the worrying experience of losing my LVM settings and potentially all my data… a quick search on the web showed a confusing set of information, much of it for older versions of LVM and therefore rather suspect.

Well, I recovered all my data and it was really quite simple, so I've written up what I did in the hope that someone else, in a similar situation, will find it useful. It's a scary thing, losing the whole hard disk and knowing that, in reality, it's all there.

First the situation

I've got a small /boot partition as ext2, and a larger one for the root directory (2Gb, also ext2). The rest of the hard disks (nearly 120Gb) are assigned to a volume group called, descriptively, 'system' (which is SuSE's idea of a default name). More accurately, they were supposed to be assigned to it. At first I had added just 60Gb to the volume group; it was my first use of LVM and I was hedging my bets. After 6 months of trouble-free operation I decided to add another 60Gb of disk, which I did 3 months ago. Except that, although the physical and volume group managers both agreed that the volume group had 120Gb, the logical volume manager insisted that there was only 60Gb. I'd used Yast2 to create and add the volumes. I tried every combination of commands I could think of to get the logical volume manager to recognise the additional space, but it wouldn't. At the time I was busy, so I forgot about it; then last week I realised that I wanted to use the space, so I settled down to do something about it.

The problem

So, it seemed the best solution would be to remove the second partition that I had added (/dev/hdd1) from the physical volume manager and then add it back. It wasn’t recognised so wouldn’t be missed, right? Wrong! pvremove /dev/hdd1 removed the label from /dev/hdd1 but also from /dev/hda7 (which was the original partition and full of data). pvscan and pvs reported no physical volumes on the disk. vgscan and vgs couldn’t find any volume groups. lvscan and lvs were non-starters obviously.

The rather surreal thing was, the whole system kept on running quite nicely, X Server and KDE desktop and all, but I knew that as soon as I rebooted the system would be toast.

First I tried adding the partition back to the volume group system, but the system couldn’t find the ‘system’ group. I tried creating the physical volume again (pvcreate) but that told me that the volume already existed. It became clear that I would need to reboot and hope that the system sorted itself out, flushed the disks, resynced, whatever.

The solution

After rebooting, the system wouldn't come up, which is kind of what I had expected, so I had to boot from the SuSE Rescue disks. So now I had to think about how to recreate the physical volumes, volume group and logical volume, and do it with the data intact. (I have daily backups, but the thought of restoring the whole system, applications and data, was not too exciting, especially as I knew all the data was there and intact.) With a 'regular' hard disk partition that had got lost I could scan the disk for potential partitions and restore them, but that wouldn't work with LVM.

On a search through various sites, I found one that mentioned the importance of saving a copy of the volume group parameters to a file using vgcfgbackup. This file could then be used to restore the parameters later, assuming that the underlying physical structure hadn't changed. Well, the physical layout hadn't changed, but unfortunately I hadn't created a backup of the volume group parameters (the 'descriptor area', to use the technical term), so that didn't seem too hopeful. I poked around in the /etc directory (I still had the '/' partition, remember, as that was on its own ext2 partition) and noticed that there were /etc/lvm/backup/ and /etc/lvm/archive/ directories. Further investigation showed that these are automatically created by LVM whenever changes are made to the system.
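The backup/restore pair described above can be sketched as follows (the VG name 'system' is the one from this post; /etc/lvm/backup/ is LVM's default backup location):

```shell
# write the current metadata of VG "system" to /etc/lvm/backup/system
# (LVM also writes this file automatically whenever the VG changes)
vgcfgbackup system

# later, restore that metadata onto the physical volumes it describes
vgcfgrestore -f /etc/lvm/backup/system system
```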

Unfortunately, all the messing around I had done had created a non-working version of the system file, and the archive files didn't seem to be recent enough. But I remembered that I had a backup of the system files (going back 6 months, in fact), so I dug out a copy of the /etc/lvm/backup/system file and used that.

Here is what I did. First, find the old UUIDs of the partitions; they are in the /etc/lvm/backup/system file. They are quite long, so make sure you copy the UUIDs for the physical volumes (not the volume group or logical volumes).

$ pvcreate -u sdSD-2343-SD939-adIda2 /dev/hda7
$ pvcreate -u dk33kd-929293nd-adfja298a /dev/hdd1
$ vgcreate -v system /dev/hda7 /dev/hdd1
$ vgcfgrestore -f /etc/lvm/backup/system system


and lo!, all data present and correct!

In fact, I just rebooted the system and was back where I had started with the additional benefit of an extra 60Gb of disk space, because now I had the extra partition properly included.

[Note: in the pvcreate lines above I could have used $ pvcreate --restorefile /etc/lvm/backup/system to pick up the UUIDs automatically, but I hadn't realized that at the time. Without the UUIDs, vgcfgrestore will not find the physical volumes that it needs to recreate the volume group.]

The lesson

Don’t panic! Keep a safe copy of your /etc/lvm/ files! Make sure that you have a Rescue disk that understands the LVM system!

Apart from the above disaster, which seems to have sorted itself out very easily, I have had no trouble with the LVM system. At first I was worried that if there was a failure it would lose everything. There is something very comforting about a simple ext2 (or FAT) partition, in that I know it can just be hacked at the bit level and rebuilt. Something like LVM, which is logical volumes on top of volume groups on top of physical volumes, is impossible to rebuild 'by hand', so I'm learning to trust the technology a bit.

[ Edited Mon Nov 18 2013, 02:54PM ]
