libera/#devuan-dev/ Saturday, 2020-03-21

17:22 <Ryushin> I've upgraded two systems from ascii to beowulf that have ZFS pools. It seems that a package is changing the /dev/disk/by-id/wwn-0x500* names to something different. So these devices change during the upgrade, any devices used by ZFS disappear, and ZFS suspends the pool. I have quite a few more servers to upgrade. I thought eudev would be the package causing this, so I upgraded eudev by itself and everything was fine.
17:22 <Ryushin> So I proceeded to do the full upgrade and I experienced the same issue.
17:23 <Ryushin> Anyone know which package controls the /dev/disk/by-id/wwn-0x500* names?
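The package question above can be investigated directly: udev's persistent-storage rules generate the wwn-* symlinks, and the package manager can map the rules file back to its owner. A minimal sketch (the rules paths are the usual Debian/Devuan locations, assumed here rather than taken from the discussion):

```shell
# List the wwn-* symlinks, if the by-id directory exists (it may
# be absent in a minimal container or rescue environment):
if [ -d /dev/disk/by-id ]; then
    ls /dev/disk/by-id/ | grep wwn- || echo "no wwn-* links present"
fi

# The wwn-* links are generated by udev's persistent-storage rules;
# find which rules files mention them (assumed standard locations):
grep -rls wwn /lib/udev/rules.d/ /etc/udev/rules.d/ 2>/dev/null || true

# dpkg -S then maps a rules file back to its owning package, e.g.:
# dpkg -S /lib/udev/rules.d/60-persistent-storage.rules
```

On a Devuan system this is one way to confirm whether eudev (rather than some other package) ships the rules that name these links.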
17:37 <mason> Ryushin: It should be safe to use UUID, and I personally prefer using GPT labels for vdevs.
17:37 <mason> Ryushin: Also, it'd be better to ask in either #devuan or #zfsonlinux, FWIW. (And I'm in both.)
17:38 <mason> Ryushin: Last bit: are you using whole disks or partitions? Partitions are far more flexible nowadays.
17:42 <Ryushin> mason: I'm using whole disks. Thing is, the /dev/disk/by-id name should not change.
17:44 <Ryushin> Another server I'm running has 182 hard drives, so special care will be needed for that one. I think I'm going to have to convert the pools to use the ATA names and not the wwn names before doing the upgrade.
17:47 <mason> Ryushin: That seems safe. It also seems entrenched enough that a migration to using partitions would be a large amount of work.
17:48 <mason> Ryushin: I'd recommend (assuming you have a maintenance window in which you can do this) booting from a rescue environment you set up for the purpose, with the new version of the system, so that you can see firsthand how devices will present themselves.
17:57 <Ryushin> mason: Yeah, I'll just boot to a rescue environment and change the SATA devices to use ATA names. The SCSI device names should not change.
17:57 <Ryushin> I was just bringing it to the attention of the devs that something is wrong.
17:57 <Ryushin> I was going to open a bug, but I did not know which package to file the bug against.
17:58 <Ryushin> I also personally prefer to give whole disks to ZFS. Makes things a lot easier.
17:59 <mason> Ryushin: That depends on where you got the packages. I use upstream packages I build from the instructions at https://github.com/openzfs/zfs/wiki/Custom-Packages (with a couple corrections), but if you're using Debian's packages, a bug against the Debian packages would make the most sense.
17:59 <mason> Ryushin: Easier except in situations like this - partition labels have never failed me.
17:59 <Ryushin> It does not seem to happen on Debian. I thought the transition to eudev might have caused it.
18:00 <mason> Oh, that's possible, in which case the best bug would be in the Devuan BTS against eudev.
18:00 <Ryushin> At least it doesn't happen in the Debian test environment that I use to validate zfs bugs before I post them to Debian.
18:00 <mason> Do at least consider /dev/disk/by-partlabel/foo for future deployments.
18:01 <Ryushin> I thought eudev was specific to Devuan.
18:02 <Ryushin> Yep. apt-cache show eudev: Maintainer: Devuan Dev Team
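mason's by-partlabel suggestion can be sketched as follows. Everything here is hypothetical illustration: the device path /dev/sdX, the pool name "tank", and the bay-style labels are placeholders, and BF01 is the GPT type code conventionally used for ZFS partitions.

```shell
# Hypothetical devices and labels; /dev/sdX is a placeholder.
# Wipe the disk's partition table and create one whole-disk
# partition carrying a meaningful GPT partition label:
sgdisk --zap-all /dev/sdX
sgdisk -n1:0:0 -t1:BF01 -c1:tank-bay00 /dev/sdX

# udev exposes each label as a stable path, so the vdevs can be
# addressed independently of wwn-/ata- symlink naming (the second
# label assumed created the same way on another disk):
zpool create tank mirror \
    /dev/disk/by-partlabel/tank-bay00 \
    /dev/disk/by-partlabel/tank-bay01
```

The label is chosen by the admin and stored in the GPT itself, which is why it survives changes in how udev names devices.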
18:04 <mason> "Oh, that's possible, in which case the best bug would be in the Devuan BTS against eudev."
18:05 <Ryushin> I'll just post the bug to that then. If it's wrong, they should be able to pinpoint it from that. Thanks, mason.
18:05 <mason> Sounds reasonable. Good luck with it. Should be a relatively straightforward change in your pools, if tedious.
18:09 <Ryushin> mason: Booting in the rescue environment should be fairly quick. Just export the pool, remove the wwn entries, then "zpool import -d /dev/disk/by-id tank" and I should be good to go.
18:11 <mason> Yeah, that seems reasonable. Maybe test it on a smaller system if you can, and not that 182-drive one. :P
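The export/import workflow Ryushin describes can be sketched as below. Only the import command is taken verbatim from the discussion; the surrounding steps are the usual sequence, and "tank" is the example pool name used above.

```shell
# From the rescue environment; substitute the real pool name.
zpool export tank

# Re-import, letting ZFS scan /dev/disk/by-id so each vdev latches
# onto whichever persistent name (ata-*, scsi-*, wwn-*) it finds:
zpool import -d /dev/disk/by-id tank

# Confirm which device names the vdevs now reference:
zpool status tank
```

Since the vdev labels on disk identify the pool members, the import rewrites only the cached device paths; the pool data is untouched.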
19:24 <LeePen> gnu_srs: Do you think this zfs issue ^^ is eudev related?
19:24 <LeePen> See #412.
