Oh boy. Tonight I:

  • installed a cool docker monitoring app called dockge
  • started moving docker compose files from random other folders into one centralized place (/opt/dockers if that matters)
  • got to immich, brought the container down
  • moved the docker-compose.yml into my new folder
  • docker compose up -d
  • saw errors saying it didn’t have a DB name to work with, so it created a new database

panik

  • docker compose down
  • copy old .env file from the old directory into the new folder!
  • hold breath
  • docker compose up -d

Welcome to Immich! Let’s get started…

Awwwwww, crud.

Anything I can do at this point?

No immich DB backup, but I do have the images themselves.

EDIT: Thanks to u/atzanteol I figured out that changing the folder name is what caused this. I changed the docker folder’s name back to the original and got my DB back! yay

  • atzanteol@sh.itjust.works

    Docker compose has a default “feature” of prefixing the names of things it creates with the name of the directory the yml is in. It could be that the name of your volume changed as a result of you moving the yml to a new folder. The old one should still be there.

    docker volume ls
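
    If it shows up there, docker volume inspect will tell you when it was created and where it lives on disk, e.g. (guessing the old project was named after its old immich-app directory):

    docker volume inspect immich-app_pgdata
    # "CreatedAt" shows when the volume was made;
    # "Mountpoint" is where its data lives under /var/lib/docker/volumes/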

    • perishthethought (OP)

      Hmmm…

      docker volume ls 
      DRIVER    VOLUME NAME
      local     1da54fed5d479f5a551aaf853999fcc3db659193df2643a2bf20470f4da06bee
      local     (a bunch more like the above)
      ...
      local     immich-app_model-cache
      local     immich-app_pgdata
      local     immich-app_tsdata
      local     immich_model-cache
      local     immich_pgdata
      

      I’m not sure how to tell which apps the many volumes with GUID-like names belong to. (I have like 12 docker apps running here.)

      My docker compose yml file also has:

      database:
          container_name: immich_postgres
          image: tensorchord/pgvecto-rs:pg14-v0.2.0
          env_file:
            - .env
          environment:
            POSTGRES_PASSWORD: ${DB_PASSWORD}
            POSTGRES_USER: ${DB_USERNAME}
            POSTGRES_DB: ${DB_DATABASE_NAME}
          volumes:
            - pgdata:/var/lib/postgresql/data
      

      I think my problem is that I didn’t have the proper .env file the first time I started it up after moving the yml file, and that’s why immich thought it needed to create a new database from scratch. Does that make sense? I’m afraid it has really overwritten those.
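
      In hindsight, a quick way to catch this would have been to render the file before starting it; docker compose config prints the compose file with the .env values substituted, so a blank POSTGRES_DB would have stood out:

      # render the compose file with .env interpolated; empty values are a red flag
      docker compose config | grep POSTGRES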

      • Lem453@lemmy.ca

        Is it not in the immich_pgdata or immich-app_pgdata folder?

        The volumes themselves should be stored at /var/lib/docker/volumes

        For future reference, doing operations like this without backing up first is insane.

        Get borgmatic installed to take automatic backups and send them to a backup target like another server or BorgBase.

        • perishthethought (OP)

          OMG! Yes!!!

          I thought it would be good to make the folder name shorter when I moved it, so it went from immich-app to just immich.

          I just now brought it down, renamed the folder, brought it back up and my DB is back again!

          Thank you so much. <3

          I will check out borgmatic too. Cheers!

          • Lem453@lemmy.ca

            Awesome, take this close call as a kind reminder from the universe to back up!

            Borg allows incremental backups from any number of local folders to any number of remote locations. Borgmatic is a wrapper for it that automates and schedules those incremental borg backups.

            I have a second server that runs this container: nold360/borgserver, which works as a borg repository.

            I also buy storage on BorgBase, so every hour an incremental backup goes to both.
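
            A minimal borgmatic config is roughly this shape (exact keys depend on your borgmatic version, and the paths and repo URLs below are only placeholders):

            # /etc/borgmatic/config.yaml (borgmatic 1.8+ flat layout, placeholder values)
            source_directories:
                - /opt/dockers
            repositories:
                - path: ssh://borg@my-backup-server/./borg-repo
                  label: local-server
                - path: ssh://xxxxxxxx@xxxxxxxx.repo.borgbase.com/./repo
                  label: borgbase
            keep_hourly: 24
            keep_daily: 7
            keep_weekly: 4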

            The other day I blew away a config folder by accident and restored it with no sweat in 2 mins.

          • atzanteol@sh.itjust.works

            Glad you sorted it!

            It’s very unexpected behavior for docker compose IMHO. When you say the volume is named “foo” it creates a volume named “directory_foo”. Same with all the container names.

            You do have some control over that by setting a project name. So you could re-use your old volumes with the new directory name.
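
            With reasonably recent compose versions there are a few ways to pin the project name (the names here just match this thread):

            # docker-compose.yml: top-level key, keeps the old prefix in the new folder
            name: immich-app

            # or in the .env file (or exported in the shell):
            COMPOSE_PROJECT_NAME=immich-app

            # or per invocation:
            docker compose -p immich-app up -d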

            Or if you want to migrate from an old volume to a new one you can create a container with both volumes mounted and copy your data over by doing something like this:

            docker run -it --rm -v old_volume:/old:ro -v new_volume:/new ubuntu:latest 
            $ apt update && apt install -y rsync
            $ rsync -rav --progress --delete /old/ /new/ # be *very* sure to have the order of these two correct!
            $ exit
            

            For the most part applications won’t “delete and re-create” a data source if it finds one. The logic is “did I find a DB, if so then use it, else create a fresh one.”

            • Lem453@lemmy.ca

              This is one of the reasons I never use docker volumes. I bind mount a local folder from the host or mount an NFS share from somewhere else. It has been much more reliable because the exact location of the storage is defined clearly in the compose file.
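
              E.g. for the database service above, the volumes entry just points at a host path instead of a named volume (the path here is only an example):

              volumes:
                - /opt/dockers/immich/pgdata:/var/lib/postgresql/data   # host folder : container path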

              Borg backup is set to back up the parent folder of all the docker storage folders, so when I add a new one the backup solution just picks it up automatically at the next hourly run.

              • atzanteol@sh.itjust.works

                I have a similar distrust of volumes. I’ve been warming up to them lately, but I still like the simple transparency of bind mounts. It’s also very easy to back up a bind mount since it’s just sitting there on the FS.

  • pe1uca@lemmy.pe1uca.dev

    Glad to see you solved the issue. I just want to point out that this might happen again if you forget your DB is in a Docker-managed volume; it’s better to put it in a folder you know.

    Last month immich released an update to the compose file for this; you need to manually change some parts.
    Here’s the post in this community: https://lemmy.ml/post/14671585

    I’ll also include this link from the same post; I moved the data from the docker volume to my own folder without issue.
    https://lemmy.pe1uca.dev/comment/2546192

    Or another option is to make backups of the DB itself. I saw this project some time ago; I haven’t implemented it on my services, but it looks interesting.
    https://github.com/prodrigestivill/docker-postgres-backup-local
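
    Even without that project, a scheduled dump from the running container is a decent stopgap, something along these lines (container name and DB_USERNAME taken from the compose file quoted above):

    # dump the whole postgres instance to a dated, compressed file
    # $DB_USERNAME is the value from immich's .env
    docker exec -t immich_postgres pg_dumpall -U "$DB_USERNAME" | gzip > immich_db_$(date +%F).sql.gz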