volume mount breaks Pod startup #3809
Knative will mount an empty log folder at the path /var/log.
Knative needs to create a volume mounted at /var/log so that a fluentd sidecar can collect logs. This is going to wipe out anything that is pre-configured in this directory. It is not clear to me that this is really a Knative bug; it is rather an incompatibility between the container as packaged and the platform. It is possible to get the nginx container to run without rebuilding the image by simply updating Cmd/Args to recreate the expected log directory before starting nginx (see the sketch below). If you are rebuilding the nginx image anyway to update other configuration, you can also update the logging directory to remove the nginx sub-directory, or update the default command. I do see how it would be slick to run nginx on Knative with little to no arguments, but I am not sure that will be a common enough use case outside of a demo to warrant adding additional configuration/flags around fluentd mount behavior.
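A minimal sketch of that workaround, assuming the stock nginx image and a current serving API version (the apiVersion, service name, and exact fields may differ by release):

```yaml
# Sketch only: the emptyDir mounted at /var/log hides the image's /var/log/nginx,
# so recreate that directory before starting nginx in the foreground.
apiVersion: serving.knative.dev/v1        # may be v1alpha1/v1beta1 on older releases
kind: Service
metadata:
  name: nginx-workaround                  # hypothetical name
spec:
  template:
    spec:
      containers:
        - image: nginx
          ports:
            - containerPort: 80
          command: ["/bin/sh", "-c"]
          args:
            - mkdir -p /var/log/nginx && exec nginx -g 'daemon off;'
```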
But there is no fluentd sidecar here. fluentd usually runs as a DaemonSet cluster-wide and collects logs automatically without having to run it as a sidecar.
The fluentd sidecar is controlled by a cluster-wide configuration setting. Running fluentd as a sidecar has the advantage of being able to process separate log streams from a single container versus just stdout/stderr. Processing these log files is done through a shared mounted volume. There is a proposal to move our sidecar usage to a DaemonSet for both stdout/stderr logs as well as the files under /var/log. I could see an argument for disabling the volume mount when varlog collection is disabled; however, that has the potential to break other applications that assume that /var/log already exists and is writable. Volume mounts seem like the best way Knative can ensure that /var/log exists and is writable as specified in our runtime contract: https://github.com/knative/serving/blob/master/docs/runtime-contract.md#default-filesystems
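For reference, a sketch of the kind of cluster-wide setting being discussed, assuming the config-observability ConfigMap and the logging.enable-var-log-collection key (verify the key name and default against your release):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-observability
  namespace: knative-serving
data:
  # When "true", Knative collects log files written under /var/log
  # (via the fluentd sidecar at the time of this discussion).
  logging.enable-var-log-collection: "false"
```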
I commented in #818. I agree with the premise that sidecars are "resource heavy"; specifically, I think they will affect performance a lot (i.e. cold start). I don't understand this statement:
To me, the sidecar should definitely be a configurable option (like now, great), but with that option turned off, Knative should not touch the file system at all. It should be an opt-in, where users know the consequences and will prepare their images accordingly.
Our options here seem to be:
1. Always mount the volume at /var/log (current behavior).
2. Only mount the volume when /var/log log collection is enabled.
3. Never mount the volume.
Option 1 has the advantages of a consistent user experience regardless of which log collection method is used, and getting an application to work on Knative is done up front. This makes it portable across Knative installations regardless of configuration. The downside to option 1 is what we are experiencing here: applications that expect files or subdirectories to exist in /var/log at startup will not work out of the box.

Option 2 has the advantage that containers like nginx will launch on Knative out of the box, but they will break if the volume mount is enabled. As log collection is currently a cluster-wide setting, this is a bit concerning. The disadvantage is that portability across Knative installs is reduced, since cluster settings are unlikely to be the same.

Option 3 has the advantage of a consistent user experience, and containers like nginx will run out of the box. However, it removes functionality that already exists and makes it difficult to run containers that output multiple log files (i.e. request, application, error, etc.). Also, containers that try to write to /var/log would no longer be guaranteed a writable directory there.

I think this might be a good topic to bring up in the API working group to see if others have any additional input or ideas here.
Yes, because now a Container object can be used in a Service template. See: https://github.com/knative/serving/blob/master/docs/spec/spec.md#service
Oh, I was able to run apply in 0.5.2 but not in 0.6.0. It throws the error below:
May I know which version you tested with privileged mode?
This seems like a duplicate of #2142.
Now that there is no fluentd sidecar (#4156), we could also get rid of the emptyDir. If there is no volume mounted to /var/log, we wouldn't overwrite anything inside the image's /var/log. The better solution would be to have k8s support mounting an emptyDir without erasing the original content.
While I do like that the solution gives the best runtime behavior of not overwriting /var/log, until we have a solution like the above that is container-runtime agnostic, I would rather we do something else to enable this. I propose the following: allow users to add a label that opts a Service out of the /var/log volume mount.

Precedent: this is similar to our cluster-local label.

FAQ:
Q. Why a label instead of adding it into the spec?
Q. What happens when we get support for mounting an emptyDir without erasing original content?
Q. Why an opt-out over an opt-in?
Q. Can we name the label key or value something else?

@sebgoa @greghaynes @JRBANCEL Thoughts?
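For illustration only, a sketch of where such an opt-out label would live on a Service; the label key and value below are hypothetical placeholders, not the ones proposed in this comment:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: nginx
  labels:
    # Hypothetical key/value for illustration only.
    example.knative.dev/disable-var-log-mount: "true"
spec:
  template:
    spec:
      containers:
        - image: nginx
          ports:
            - containerPort: 80
```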
Issues go stale after 90 days of inactivity. Send feedback to Knative Productivity Slack channel or file an issue in knative/test-infra. /lifecycle stale |
Stale issues rot after 30 days of inactivity. Send feedback to Knative Productivity Slack channel or file an issue in knative/test-infra. /lifecycle rotten |
Rotten issues close after 30 days of inactivity. Send feedback to Knative Productivity Slack channel or file an issue in knative/test-infra. /close |
@knative-housekeeping-robot: Closing this issue.
/reopen
It would be extremely beneficial to be able to disable the /var/log overwriting. This problem appears to break any app using nginx to front a service, and there are tons of them out there. Sidecars are fine, but DO NOT modify my containers. This steps outside of Knative's area of concern: Knative should serve up an untouched container. Logging is far down the list of importance if the application cannot be deployed at all. This is a pretty urgent issue and it needs to be addressed quickly. I'm new to the Knative community, but this is just broken. Knative overwriting any directory in a container is a violation of the trust that container integrity will not be violated by the infrastructure.
@ReggieCarey: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/reopen
There were discussions about removing this feature since, as you mentioned, it really is out of scope for Knative. /cc @dprotaso
@JRBANCEL: Reopened this issue.
What is the status here? We are demoing our platform, which relies heavily on Knative, and all our customers stumble into this problem, making our claim that "it just works by deploying a Knative service" look foolish. How can we be sure that containers are left untouched? This is unworkable.
/lifecycle frozen
@Morriz we'll phase out the behaviour that's breaking these containers. tl;dr: we will put it behind an operator feature flag and change the default after enough time and communication.
/assign @dprotaso
@dprotaso: You must be a member of the knative/knative-milestone-maintainers GitHub team to set the milestone. If you believe you should be able to issue the /milestone command, please contact the knative/knative-milestone-maintainers team and have them propose you as an additional delegate for this responsibility.
In what area(s)?
/area API
What version of Knative?
HEAD
Expected Behavior
Run the nginx container as a service, specifying a non-default port (i.e. port 80). I expected the service to run successfully and to be able to access the nginx welcome page.
Actual Behavior
The Pod did not start due to volume mount issues:
Pod manifest:
Steps to Reproduce the Problem
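A minimal sketch that should reproduce the failure, assuming a stock nginx image and that /var/log collection is enabled on the cluster (apiVersion and defaults may differ by release):

```yaml
# With /var/log collection enabled, the injected emptyDir mount at /var/log
# hides the image's /var/log/nginx directory and nginx fails to start.
apiVersion: serving.knative.dev/v1        # may be v1alpha1 on the release in question
kind: Service
metadata:
  name: nginx
spec:
  template:
    spec:
      containers:
        - image: nginx
          ports:
            - containerPort: 80
```

Applying this with kubectl apply and inspecting the resulting Pod should show the container failing to start, as described above.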