
A way of mounting a directory where files from the container overwrite host files with docker compose watch #12510

Open
oprypkhantc opened this issue Jan 29, 2025 · 6 comments


@oprypkhantc

oprypkhantc commented Jan 29, 2025

Description

Hey.

First of all, it seems there was already a similar issue, but it lacked the context to explain why it's important to have this implemented in some way, which is why I'm creating another one, sorry: #11658

Our app

Our app is a PHP app that uses the Composer package manager, but all of this is relevant for Node.js apps as well. All packages are installed into the /vendor directory, so the entire project structure looks something like this:

.
├── backend/
│   ├── src/
│   │   └── SourceFile.php
│   ├── vendor/
│   │   └── google/
│   │       └── api-client/
│   │           └── GoogleFile.php
│   ├── Dockerfile
│   ├── composer.json
│   └── composer.lock
└── docker-compose.yml

For development, each team member uses an IDE. The IDE uses files in backend/vendor/ to provide type information, auto-complete and to show sources of vendor packages whenever necessary. Moreover, since PHP is an interpreted language, sometimes we modify the files in backend/vendor/ directly to assist with debugging. Of course, any changes in backend/vendor/ are only ever done locally, during development and with full understanding that the changes are going to be gone when Composer re-installs dependencies.

docker-compose.yml

docker-compose.yml is only used for local development. Hence, it currently uses bind mounts to share the entire backend directory into the container:

services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
      args:
        - CONTAINER_ENV=local
    volumes:
      - ./backend:/app
    deploy:
      replicas: 2

This works, but requires each developer to keep track of changes in Composer's lock files and run composer install (which installs dependencies into vendor) every time the lock file changes, by running something like docker compose run --rm -it app composer install or docker compose exec app composer install. It works this way:

  1. old dependencies are bind mounted from /backend/vendor to /app/vendor
  2. Composer downloads and modifies dependencies in /app/vendor
  3. bind mount syncs changes from /app/vendor back to the host to /backend/vendor
  4. other running containers see the changes on the host and propagate them too

Dockerfile

Locally, no project files are copied into the container; the Dockerfile is just a base PHP image with some configuration.

On production, the app runs on AWS Fargate (which means no way to mount anything), so we pre-build our application into a Docker image, with all Composer dependencies and project files.

This is how it looks:

ARG CONTAINER_ENV

FROM php:8.2.27-fpm-alpine3.21 AS base

COPY --from=composer:2.7.4 /usr/bin/composer /usr/local/bin/composer

WORKDIR /app


FROM base AS base-local

# Nothing here


FROM base AS base-production

COPY backend/composer.json /app/composer.json
COPY backend/composer.lock /app/composer.lock
RUN composer install

COPY backend/src /app/src


FROM base-${CONTAINER_ENV}

EXPOSE 22 80
CMD tail -f /dev/null

docker compose watch

Now, there are several services like these in our project. Each requires developers to keep track of lock files and re-run package managers whenever they change. This is inconvenient and creates a lot of situations that could have been avoided. It also means our production build works in an entirely different way from our local builds.

This is where docker compose watch helps - not only would it allow us to use the same (production) Dockerfile for all environments, but it would also eliminate all unnecessary movements developers currently have to make. So let's say we modify the above docker-compose.yml to include the watch configuration, and remove the volume:

services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
    deploy:
      replicas: 2
    develop:
      watch:
        - action: sync
          path: ./backend
          target: /app
          ignore:
            - backend/vendor/
        - action: rebuild
          path: backend/composer.lock

This works, but now developers no longer have access to backend/vendor on the host, meaning the IDE has no idea what dependencies are installed, and neither do developers. This is a problem.

Let's say we remove the ignore: [backend/vendor/] part. Still, backend/vendor/ is not synced back to the host if it didn't exist in the first place.
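To be explicit, the variant without the ignore list would be (a sketch):

```yaml
# Sketch: the same watch config with the ignore list removed.
# `sync` only copies host-side changes into the container, never the
# reverse, so files that exist only in the container (like vendor/)
# still never appear on the host.
services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
    develop:
      watch:
        - action: sync
          path: ./backend
          target: /app
        - action: rebuild
          path: backend/composer.lock
```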

Okay, let's try adding the volume back, just for the vendor directory, and ignore it for watch:

services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
    volumes:
      - ./backend/vendor:/app/vendor
    develop:
      watch:
        - action: sync
          path: ./backend
          target: /app
          ignore:
            - backend/vendor
            - app/vendor
            - vendor
        - action: rebuild
          path: backend/composer.lock

Still broken. Now both the host and container have an empty vendor folder.

Summary

We need a way of syncing the backend/vendor folder between the host and the container, but with files built into the image always overwriting the host contents.

@ndeloof
Contributor

ndeloof commented Jan 30, 2025

AFAICT your last attempt is close to a solution; you could rely on the sync+exec watch action:

services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
    volumes:
      - ./backend/vendor:/app/vendor
    develop:
      watch:
        - action: sync
          path: ./backend
          target: /app
          ignore:
            - backend/vendor
            - app/vendor
            - vendor
        - action: sync+exec
          path: backend/composer.lock
          target: backend/composer.lock
          exec:
            command: composer install

Anyway, I'm a bit confused by the initial statement "This requires each developer to keep track of changes in Composer's lock files and run composer install (which installs dependencies into vendor) every time the lock file changes" - doesn't your IDE detect updates to the lock file and suggest running this command? This actually sounds like a local workflow automation issue (source code being synced from the upstream repo) rather than a Compose issue.

@oprypkhantc
Author

oprypkhantc commented Jan 30, 2025

Would love to try your solution, but it seems that sync+exec is available from docker compose v2.32, and the latest shipped docker compose with Docker for Mac currently is Docker Compose version v2.31.0-desktop.2.

It does seem like it will work, but it would also mean that composer install would run from scratch, without Dockerfile build-time cache, every time a container is up. I was looking for more of a native solution: with action: rebuild we could use a Dockerfile like this:

RUN --mount=type=cache,target=/root/.composer/cache composer install

It does seem like this could be extracted into a named volume and be mounted in docker-compose.yml, but it's still a bit more complicated than I would prefer :) Hope you get where I'm coming from.
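For reference, that extraction would look roughly like this (just a sketch; composer-cache is an illustrative name, and it assumes Composer keeps its cache at /root/.composer/cache):

```yaml
# Sketch of the workaround mentioned above: persist Composer's cache in a
# named volume so that a `composer install` triggered by sync+exec doesn't
# start from scratch every time. `composer-cache` is a hypothetical name.
services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
    volumes:
      - ./backend/vendor:/app/vendor
      - composer-cache:/root/.composer/cache

volumes:
  composer-cache:
```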

IDE

The IDE does detect updates, and it does suggest running the install command. However:

  • not all developers see or pay attention to those notifications, especially AQAs
  • you still have to click those manually and wait, instead of just having the docker compose watch running somewhere and it doing 99% of work
  • there are currently three "sub-projects" (i.e. services that are all part of a single project, stored in a monorepo), and all three of them are updated quite frequently, which makes it 3x more likely that someone will miss the notification or forget about it. Developers currently have a script they run when switching branches or pulling changes from the remote, but it is still not ideal

And, most importantly, it still does not solve the issue of unifying the different Dockerfile "strategies" we use for local and production deployment. Unifying those would be great in and of itself, but it would also allow running additional commands as part of the build process on local environments, which we currently cannot do because there are no project files during a Docker build on local environments. They are only copied into a container with a volume, so we have to run these commands separately after containers start.

I get where you're coming from. But docker compose watch seems like a perfect solution that would eliminate both a separate Docker build process for local, and eliminate any "manual" part from the entire developer experience. It would be seamless and not require any additional scripts or SDLCs :)

@oprypkhantc
Author

oprypkhantc commented Feb 3, 2025

A new Docker for Mac was released, so I tried the solution you suggested. Unfortunately, it still does not work. I'm not sure the "exec" portion is even executed the first time I run docker compose up --build --watch app, but even changing the composer.lock file manually to trigger the command still doesn't result in dependencies being installed, and the vendor folder stays empty. The terminal does not show any of the progress or logs that composer install would output, only this:

[+] Running 3/3
 ✔ app                                  Built    0.0s
 ✔ Network compose-watch-test_default   Created  0.1s
 ✔ Container compose-watch-test-app-1   Created  0.1s
        ⦿ Watch enabled
Attaching to app-1

So I'm not sure if composer install has ever even run. For easier reproducibility, I've prepared a tiny repo: https://github.com/oprypkhantc/compose-watch-test

You can try running docker compose up --build --watch app and hopefully see if there's still something I'm doing wrong, or if it's an issue with Docker Compose itself. Also, as I said in the message above, it'd still be nice not to use sync+exec, since the Dockerfile already runs composer install, so it'd be perfect to just use rebuild on a composer.lock change somehow :)

@oprypkhantc
Author

oprypkhantc commented Feb 3, 2025

Here's a similar issue I've stumbled upon that is also relevant:

I want to set up a tool called Prettier in docker-compose.yml using a Docker image, without mounting or messing with node_modules at all, i.e. treating the image as a black box. It works well with a setup like this:

services:
  prettier:
    image: jauderho/prettier:3.4.2-alpine
    restart: "no"
    command: --cache --cache-location=/work/storage/tmp/.prettier-cache --cache-strategy=content --log-level warn --write .
    volumes:
      - ./:/work
    deploy:
      replicas: 0

However, there's an issue with the IDE, where it requires you to have the prettier package installed in project scope in node_modules. In other words, it wants this:

services:
  prettier:
    image: jauderho/prettier:3.4.2-alpine
    restart: "no"
    command: --cache --cache-location=/work/storage/tmp/.prettier-cache --cache-strategy=content --log-level warn --write .
    volumes:
      - ./:/work
      - ./node_modules:/var/lib/node_modules
    deploy:
      replicas: 0

But if I do that, node_modules both inside the container and on the host is, expectedly, empty. To be clear: this is an issue with the IDE, and I've reported it on their end. Still, having some way of mounting a folder where container files take precedence over host files and overwrite them on container startup would be an okay workaround for now, but it doesn't seem to be possible.
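For what it's worth, the closest existing behavior I'm aware of is that a named volume (unlike a bind mount) gets populated from the image's content at the mount path when it is first created. A sketch, assuming the image actually ships its modules at /var/lib/node_modules:

```yaml
# Sketch: on first creation, a named volume is populated from the image's
# content at the mount path (bind mounts never are), so the container sees
# the image's node_modules. The host still can't browse them directly, though.
services:
  prettier:
    image: jauderho/prettier:3.4.2-alpine
    volumes:
      - ./:/work
      - prettier-modules:/var/lib/node_modules

volumes:
  prettier-modules:
```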

Could it be considered to add a flag to bind mounts specifying this exact behaviour? E.g.:

services:
  prettier:
    image: jauderho/prettier:3.4.2-alpine
    restart: "no"
    command: --cache --cache-location=/work/storage/tmp/.prettier-cache --cache-strategy=content --log-level warn --write .
    volumes:
      - ./:/work
      - type: bind
        source: ./node_modules
        target: /var/lib/node_modules
        copy_from_container: true
    deploy:
      replicas: 0

That should solve both use cases. I understand that this isn't compose's concern, and I can request that feature on https://github.com/moby/moby side, but first I wanted to hear from you whether that would even be possible and how that'd play with docker compose watch.

@ndeloof
Contributor

ndeloof commented Feb 5, 2025

A bind mount, by nature, replaces the container's filesystem at the target path with the one from the bind source. New files written by the container are actually created directly on the host. If the container image comes with some initial content at this mount path, it will just be hidden by the mount. This is how Unix mounts work; there's no voodoo magic to be expected here. The challenge is not about declaring a new attribute in compose.yaml, but about the docker engine managing a scenario that contradicts these core concepts.

@oprypkhantc
Author

I see that this is the case with bind mounts. I just thought that Docker has access to the image and would be able to copy the files from the image directly onto the mount, which does not seem possible in userland. I understand this might not actually be possible or feasible, and I fully get that it looks like a crutch, not a proper solution; that's just the first thing that came to mind.

But as you can see, this is a valid use case. I'm not sure how others utilize docker compose watch for development without a feature similar to this. Similar functionality has also been asked about on Stack Overflow as far back as 2017:

https://stackoverflow.com/questions/47664107/docker-mount-to-folder-overriding-content
https://stackoverflow.com/questions/42848279/how-to-mount-volume-from-container-to-host-in-docker
https://stackoverflow.com/questions/66724297/docker-compose-volume-copying-folder-from-docker-container-to-host-when-executi

So it seems that a named volume is closer to a solution than a bind mount, but still doesn't really work:

  • you have to create a directory on the host for the volume manually
  • there's no way to drop the volume on image change, or as a watch action
services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
    volumes:
      - ./backend:/app
      - type: volume
        source: backend-vendor
        target: /app/vendor
    develop:
      watch:
        - action: rebuild
          path: backend/composer.lock

  cli:
    image: composer:2.7.4
    working_dir: /app
    volumes:
      - ./backend:/app

volumes:
  backend-vendor:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ./backend/vendor

Would named volumes maybe be a better starting point? Maybe something like:

services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
    volumes:
      - ./backend:/app
      - type: volume
        source: backend-vendor
        target: /app/vendor
    develop:
      watch:
        - action: rebuild
          path: backend/composer.lock
+       - action: host_exec
+         command: rm -rf backend/vendor

  cli:
    image: composer:2.7.4
    working_dir: /app
    volumes:
      - ./backend:/app

volumes:
  backend-vendor:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ./backend/vendor
+     create_host_path: true

Still looks like a workaround, especially the host_exec. I understand that host_exec is likely never going to happen, but maybe that'll give you a better idea of what I'm trying to achieve. There may be other ways too :)
