Hey people! It seems I have a really messed-up fstab or something else, after Windows tried to do a “disk repair”.

Now after decrypting my LUKS storage it seems it tries to mount a nonexistent Windows partition and always fails.

I am using default BTRFS on Fedora Kinoite.

Does anyone have an idea how to fix this? Thanks!

Update: solution found!

I literally had the external Windows drive mounted to a subdirectory of my home directory, so since it wasn't attached, for some weird reason nothing loaded.

Will try to use the nofail flag, thanks @rotopenguin@infosec.pub for the tip!

  • tal@lemmy.today · 13 points · 8 months ago (edited)

    Now after decrypting my LUKS storage it seems it tries to mount a nonexistent Windows partition and always fails.

    I have never used Kinoite, nor a LUKS-based environment, but if you have a mount point in your /etc/fstab and mounting it fails at boot, at least Debian will go into emergency mode. I wouldn’t be surprised if Kinoite does too.

    If you have an entry there for the Windows partition, enter emergency mode, use an editor like nano or whatever you’re comfortable with to edit that file, and comment the Windows mount line out with a leading pound sign (#). Save, and run shutdown -r now to reboot; you should hopefully come up to a sane Linux environment, sans the Windows mount.
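    A minimal sketch of that edit, done with sed on a throwaway copy (the device path and mount point here are invented placeholders; on a real system you would edit /etc/fstab itself):

```shell
# Demo on a throwaway file; device path and mount point are made up.
cat > /tmp/fstab.demo <<'EOF'
UUID=1111-2222  /boot/efi             vfat   umask=0077  0 2
/dev/sda3       /home/me/Windows-SSD  ntfs3  defaults    0 0
EOF

# Prefix the Windows line with a pound sign so mount skips it at boot:
sed -i '\|Windows-SSD|s|^|#|' /tmp/fstab.demo

cat /tmp/fstab.demo
```

    After this, the Windows line starts with # and is ignored at boot; the other entries are untouched.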

    It’s possible that whatever drive Windows is on – which I assume is internal – is dying. That could cause Windows to fail a repair, to be unable to boot, and your Linux distro on your external drive not to be able to find your Windows partition.

    If you can get into your Kinoite environment, you can run lsblk to see a list of drives that it can see. If the internal drive doesn’t show up, that’d be a pretty good red flag that something’s wrong with it.

    EDIT: I’ll also add that, if that’s the problem, this is the second request for help in a couple of days that I’ve responded to involving a failing drive, where it wasn’t obvious to the user that the drive was failing; in diagnosing the problem, they blamed software on the computer (one person decided that Arch was likely at fault and put Debian on their system, and here Windows is catching flak). I think there’s a legitimate argument that PCs need to do a better job of handling drive failure from a UI standpoint, as the symptoms a failing drive can produce are not always obvious:

    • Lengthy delays when an OS tries accessing a drive.

    • A drive not being visible to the computer.

    • Various programs failing when trying to work with the drive.

    • Failure to boot.

    A number of things occur to me that could be helpful:

    • A BIOS could remember a brief description of the last media it successfully booted from. If it cannot find any valid boot media, it could display a message saying that it cannot see the drive with the given manufacturer, model name, and capacity and suggesting that the drive may have failed. That won’t catch every situation, like if there are multiple sources of bootable media present, but I think that for a lot of users, that’d direct them down the right path.

    • If there has been no hard power down (i.e. to cold-remove a drive), a BIOS has a drive plugged into a controller that it knows does not support hot-swap capabilities (e.g. an on-board ATA/SATA/NVMe controller), and the drive is no longer visible, it’s a pretty good bet that that drive has failed. It might not be a bad idea to default to a delay at boot for N seconds showing a message about a probably failed drive. Have a BIOS option to disable “interactive diagnostic support” for headless systems.

    • Linux distros could be set up with better support for indicating to a user that a drive is failing. SMART won’t always catch it, but having SMART active on a distro by default might be a good idea. Having errors there, or I/O errors to a non-removable-media drive percolate up to something user-visible, like the notification manager, by default that suggests that the drive might be seeing physical problems might help. Normally, if I’m thinking that a drive might be failing, I go check the kernel log for I/O failures, but that’s probably not obvious to everyone, and it doesn’t default to indicating to the user that something might be broken. If SMART is saying that a drive is failing or there are I/O errors occurring to the drive, it’s probably at least reasonable to suggest to the user that they don’t want to be relying on that drive any more.
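    The manual check described above can be sketched like this (the sample lines are modeled on typical kernel block-layer error messages, not taken from a real machine; on a live system you would grep dmesg or journalctl -k instead of a sample file):

```shell
# Count I/O errors in a saved kernel log. The sample lines below are
# invented, modeled on the usual blk_update_request-style messages.
# (For SMART health, smartctl -a /dev/sda from smartmontools is the
# usual tool.)
cat > /tmp/kmsg.sample <<'EOF'
[  12.3] blk_update_request: I/O error, dev sda, sector 123456 op 0x0:(READ)
[  12.4] Buffer I/O error on dev sda1, logical block 5, async page read
[  13.0] usb 1-2: new high-speed USB device number 4
EOF

grep -ci 'i/o error' /tmp/kmsg.sample   # prints 2
```

    A nonzero count on a non-removable drive is the kind of signal that, as argued above, should be surfaced to the user automatically rather than requiring a manual log grep.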

    • On the above note, having the kernel retain I/O error counts for devices (maybe it does, but I don’t recall seeing it) and logging those in system statistics daemons like sysstat and collectd might help for post-mortem analysis of drive failure (well, maybe not if the failing drive is the one where the statistics log is going).

    Those are heuristics, but I think that they aren’t likely to throw too many false positives out, and they may help users figure out why their machine isn’t happy.

    • Pantherina@feddit.de (OP) · 3 points · 8 months ago

      Thanks, yes very true. I didn't know that every mount in fstab needs to succeed, and I literally didn't have the drive attached. Removing the line (what sign do you use for commenting, pound?? £?) fixed everything.

  • garrett · 11 points · 8 months ago (edited)

    My first attempt to try to fix something like this would be to:

    1. Download Fedora Workstation live media. (Within Windows or some other computer that boots.)
    2. Flash it to a USB stick.
    3. Reboot to the live desktop from the USB stick. (It might require pressing F12 or some other key combo during boot.)
    4. “Try out” Fedora. (That is: do not install.)
    5. Open GNOME Disks. (I think it’s included. Otherwise, you can sudo dnf install gnome-disks to install it temporarily on the live session.)
    6. Try to mount the main filesystem that contains /etc/fstab (it should ask you for the LUKS password).
    7. Comment out the Windows mount point. Or if you want to keep it (if the partition still exists and is just “dirty” and still needs a check from Windows) add “,nofail” after “auto” to the options in the line for the mount, so your system should still boot without that mount point.
    8. Save the /etc/fstab file.
    9. Shut down the computer.
    10. Unplug USB stick.
    11. Boot computer. Linux should successfully boot… hopefully. 😉
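    The nofail edit in steps 6–8 can be sketched like this (the UUID and paths are invented for illustration; from the live session you would edit the fstab on the mounted installation, not the live system’s own /etc/fstab):

```shell
# Demo on a throwaway file; UUID and mount point are made up.
cat > /tmp/fstab.demo <<'EOF'
UUID=ABCD-1234  /home/me/Windows-SSD  ntfs3  defaults,auto  0 0
EOF

# Append ",nofail" to the 4th (options) field of the Windows line,
# so a missing partition no longer blocks boot:
awk '/Windows-SSD/ { $4 = $4 ",nofail" } { print }' \
    /tmp/fstab.demo > /tmp/fstab.new

cat /tmp/fstab.new
```

    With nofail in the options column, systemd treats the mount as optional: if the partition is absent at boot, the system continues instead of dropping to emergency mode.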

    I’m also wondering: How did you add the Windows partition to Fedora? Was it from within Fedora’s installer (aka: “Anaconda”)? Or did you add it in a different way?

    (BTW: I use Silverblue and have a long history with Fedora. 😁)

          • garrett · 2 points · 8 months ago

            You can set up mount points on Linux, at least in GNOME, very easily. (It’s even fully automatic for external disks.) I’d be surprised if it isn’t as easy in KDE and other desktops too.

            The problem here (at least from what it sounds like) isn’t setting up mount points. The problem is fixing an incorrect fstab on the disk that’s causing the system to hang on boot.

            (This isn’t a typical situation, which is why I also asked about how the partition was added to the system.)

      • garrett · 1 point · 8 months ago (edited)

        Good point! GNOME Disks can do this, actually. I didn’t think about that.

        (Edit: However, I think it’ll just edit the /etc/fstab of the running system. In other words, the one of the live session, not the one on the installation.)

    • Pantherina@feddit.de (OP) · 1 point · 8 months ago

      Windows worked normally, until it didn't. Fedora worked normally and has been installed for a long time.

      I wanted to access the Windows storage partition from ~/Windows-SSD and set the mount point in KDE Partition Manager. Didn't think that would create such a mess.

      Problem is, I have no idea how I installed Fedora, as my UEFI doesn't allow regular storage devices, just UEFI entries. No idea why; I set everything normally and even “legacy boot first”, but no USB sticks are shown.

      I will ask in another thread how to generate unspecified USB boot entries.

      • superguy · 1 point · 8 months ago

        Because there’s so much crap it’s impossible to know it all.

        I guarantee this isn’t the only thing that you think you should know about but don’t.

    • Pantherina@feddit.de (OP) · 2 points · 8 months ago

      Thanks! Could you give me an example line and where to put it? It's the solution I like most.

      • rotopenguin@infosec.pub · 5 points · 8 months ago

        It goes in the fourth column of the fstab, so like

        /dev/disk/by-label/Butts /mnt/pants buttfs defaults,nofail,subvol=@ss 0 0

        (love too eat spaces and any other attempt at formatting text)
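        In less joke-y terms, a real NTFS entry with nofail might look like the following (the UUID and mount point are invented; find the real UUID with lsblk -f or blkid, and use ntfs-3g instead of ntfs3 if your system mounts NTFS through FUSE rather than the kernel driver):

```
UUID=01D9ABCDEF123456  /home/me/Windows-SSD  ntfs3  defaults,nofail  0 0
```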

  • Fleppensteyn@feddit.nl · +7/−1 · 8 months ago

    You probably have to boot to Windows first and let it finish the disk checking. Then make sure fast boot is turned off.

  • mvirts@lemmy.world · 3 points · 8 months ago

    Good job fixing the problem! It’s always that thing you did that seemed to work 😹