feat(metadata): use manifest to drive config #1701
Conversation
subPath: metadata.env
- name: config-volume
  readOnly: true
  mountPath: /aggregate_config.json
I'm trying to do one thing at a time, but in conjunction with this PR we should update the cdis-manifest entries as needed to add an aggregate_config.json file per commons. I'll work on the automated/init process for importing data automatically on a roll, but the immediate change would be...
kubectl exec $(gen3 pod metadata) -- python /src/src/mds/populate.py --config /aggregate_config.json --hostname esproxy-service --port 9200
Can we have this in an init-container instead? The init-container can run this before starting the service.
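A minimal sketch of what such an init container might look like in the metadata deployment, reusing the populate command and the config-volume mount shown above. The container name, image tag, and subPath here are illustrative assumptions, not the PR's exact manifest:

```yaml
initContainers:
  - name: aggregate-import                        # hypothetical name
    image: quay.io/cdis/metadata-service:master   # assumption: same image as the main container
    command: ["/bin/sh", "-c"]
    args:
      - |
        # skip cleanly when no aggregate config is mounted
        [ -f /aggregate_config.json ] || exit 0
        python /src/src/mds/populate.py --config /aggregate_config.json \
          --hostname esproxy-service --port 9200
    volumeMounts:
      - name: config-volume
        readOnly: true
        mountPath: /aggregate_config.json
        subPath: aggregate_config.json
```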
If there's no config, then the init container will exit 0.
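The exit-0 behaviour can be sketched as a small entrypoint function. The config path and populate invocation are taken from the command earlier in the thread; the function name and the default path fallback are illustrative:

```shell
# run_import: exit 0 when no aggregate config is mounted, otherwise
# run the populate script (a sketch, not the PR's exact entrypoint).
run_import() {
  config="${1:-/aggregate_config.json}"
  if [ ! -f "$config" ]; then
    echo "no aggregate config at $config; skipping import"
    return 0
  fi
  python /src/src/mds/populate.py --config "$config" \
    --hostname esproxy-service --port 9200
}
```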
Yeah, I think you're right. I was going to wait to do https://ctds-planx.atlassian.net/browse/HP-287, but I may as well do this while I'm updating the rest of it.
Force-pushed from d056552 to facc32d
Took a little longer (a bit of bash trial and error), but see the latest, @jawadqur. Updates were made to add an init container as you suggested. It seems to be working properly: it exits 0 if not configured, or runs the import as needed.
Force-pushed from facc32d to ef5fe5f
Force-pushed from 3c175f6 to cc98989
This feature is more complicated than can easily be done in the current PR and needs more consideration.
Taking the functionality for https://ctds-planx.atlassian.net/browse/HP-287 out of this PR. That work is more complex than should be taken on here and was expanding the scope. We can't easily run the agg MDS import post-init or in an init container, because an agg MDS may refer to its own non-agg MDS as a datasource: a chicken-and-egg problem. This probably belongs in application code rather than in k8s config.
Makes sense, let's revisit later.
@themarcelor @mfshao regarding our offline conversation around backwards compatibility: I've found that optional volumes in k8s do work as expected across an array of situations, and I believe this PR now covers both the current state and the future state.

Testing

The various configuration mechanisms were toggled/deleted while I did a `gen3 roll all` to validate the health of metadata. All scenarios worked without issue. The metadata.env changes that are now deprecated were like so...
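For reference, the deprecated `metadata.env` entries were presumably along these lines. The variable names come from the deployment-changes notes in this PR; the values are illustrative:

```
USE_AGG_MDS=true
AGG_MDS_NAMESPACE=default
```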
g3auto secrets and configs would be synced like so (using the appropriate branch) ...
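The sync step was presumably something like the following; `gen3 kube-setup-metadata` is the setup command referenced in the deployment notes, and the exact invocation here is a sketch rather than a transcript:

```sh
# re-sync g3auto secrets/configs for the service and redeploy (sketch)
gen3 kube-setup-metadata
gen3 roll metadata
```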
Configs and secrets were cherry-picked for deletion and verified like so ...
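A hypothetical version of the cherry-picked deletion and verification; the secret and configmap names here are guesses, not confirmed by this thread:

```sh
kubectl delete secret metadata-g3auto          # hypothetical name
kubectl delete configmap manifest-metadata     # hypothetical name
kubectl get secrets | grep metadata
kubectl get configmaps | grep metadata
```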
Service status was verified like so ...
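Status verification might have looked like the following; `gen3 pod metadata` appears earlier in the thread, while the label selector and health endpoint are assumptions:

```sh
kubectl get pods -l app=metadata
kubectl exec $(gen3 pod metadata) -- curl -sf http://localhost/_status   # assumed health endpoint
```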
Jira Ticket: HP-291, HP-320

New Features
Improvements
Deployment changes
- `USE_AGG_MDS` and `AGG_MDS_NAMESPACE` move from `Gen3Secrets/g3auto/metadata/metadata.env` to variables set in a `manifest: {}` block in `manifest.json`
- Aggregate config is read from the `metadata/aggregate_config.json` path
- Run `gen3 kube-setup-metadata` and roll the metadata service in Kubernetes
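Putting the deployment change together, the manifest fragment might look like the following. The block key and values are illustrative: the thread renders the block as `manifest: {}`, which may be a display artifact, so the key name here is an assumption:

```json
{
  "metadata": {
    "USE_AGG_MDS": "true",
    "AGG_MDS_NAMESPACE": "default"
  }
}
```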