You’ve reached the JBoss Cowboys demo repository. This demo and the accompanying workshop are based on the kitchensink quickstart.
Why JBoss Cowboys? Well, because of the Space Cowboys movie, no less! In that movie, some veteran pilots have to help younger ones with a problem with a Cold War era satellite. They prove to be in good shape and very committed, and in the end they solve the problem.
So here you are, about to embark on a mission where you will deploy the well-known JBoss EAP application 'kitchensink' on OpenShift (hence using containers). You’re going to do it using different approaches, but all of them ultimately rely on Red Hat OpenShift S2I to build the container image.
You will start by using S2I manually; then you will do it automatically using GitOps (Argo CD), starting with plain descriptors, then kustomize, helm, and finally helm + kustomize.
There’s no space mission without a patch; here is the one for this mission!
Good luck.
Create a .env
In the folder where you have cloned this repository, create a .env file with the following set of variables and fill in the empty ones.
DEV_USERNAME=user1
DEV_PASSWORD=FILL_ME
PROJECT_NAME=kitchensink-${DEV_USERNAME}
ADMIN_USER=admin
ADMIN_PASSWORD=FILL_ME
CLUSTER_DOMAIN=cluster.example.com
SERVER=https://api.${CLUSTER_DOMAIN}:6443
GIT_SERVER=https://repository-gitea-system.apps.${CLUSTER_DOMAIN}
ARGO_SERVER=https://openshift-gitops-server-openshift-gitops.apps.${CLUSTER_DOMAIN}
GH_OAUTH_CLIENT_ID=23...
GH_OAUTH_CLIENT_SECRET=70f...
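To sanity-check the file, source it and echo a couple of the derived variables (a quick sketch; adjust the variables you echo):
. .env
echo "PROJECT_NAME=${PROJECT_NAME}"
echo "SERVER=${SERVER}"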
Open two terminal windows, referred to as T1 and T2, and change to the folder where this repository has been cloned.
Run this command in T1:
. .env
export KUBECONFIG=~/.kube/config-kitchensink-admin
oc login -u ${ADMIN_USER} -p ${ADMIN_PASSWORD} --insecure-skip-tls-verify=true --server=${SERVER}
Install the Argo CD operator, OpenShift Pipelines, and Quay (a simplified, non-supported configuration). Log in as cluster-admin and run this command from T1:
until oc apply -k util/bootstrap/; do sleep 2; done
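The until loop retries because the operator CRDs may not be registered on the first pass. To confirm the operators finished installing, a quick check (assuming the default namespaces) is:
oc get subscriptions.operators.coreos.com -A
oc get pods -n openshift-gitops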
Now, run this command in T2 before running the demo:
. .env
export KUBECONFIG=~/.kube/config-kitchensink-user
oc login -u ${DEV_USERNAME} -p ${DEV_PASSWORD} --server=${SERVER} --insecure-skip-tls-verify=true
Then go to https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-github/#setting-up-the-github-oauth-app_che and follow the instructions to create a GitHub OAuth App.
Next, run this command to create a secret so that Dev Spaces can automatically create credentials for GitHub repositories:
. .env
./util/create-checluster-github-service.sh ${GH_OAUTH_CLIENT_ID} ${GH_OAUTH_CLIENT_SECRET}
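To verify the configuration landed, you can check that the CheCluster resource exists (a sketch, assuming the Dev Spaces operator is installed):
oc get checluster -A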
Open Dev Spaces with this link. Update to another org if you have forked this repo.
Kick-start an IDE environment using this link:
Grant access to the organization where you have forked this repo… if you have.
Now open a terminal and start the lab proper.
At this stage you will deploy the application using S2I manually.
Now switch to T2. First of all, create the project that will host the application:
oc new-project s2i-${DEV_USERNAME}
Deploy the PostgreSQL database for the kitchensink app:
oc new-app --name=kitchensink-db \
-e POSTGRESQL_USER=luke \
-e POSTGRESQL_PASSWORD=secret \
-e POSTGRESQL_DATABASE=kitchensink centos/postgresql-10-centos7 \
--as-deployment-config=false
oc label deployment/kitchensink-db app.kubernetes.io/part-of=kitchensink-app --overwrite=true && \
oc label deployment/kitchensink-db app.openshift.io/runtime=postgresql --overwrite=true
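Before deploying the application, you may want to wait for the database rollout to complete:
oc rollout status deployment/kitchensink-db --timeout=120s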
To fix the SCC issues (optional for now), patch the deployment:
oc patch deployment kitchensink-db --type json -p '
[{
"op": "add",
"path": "/spec/template/spec/securityContext",
"value": {
"runAsNonRoot": true,
"seccompProfile": {
"type": "RuntimeDefault"
}
}
},
{
"op": "add",
"path": "/spec/template/spec/containers/0/securityContext",
"value": {
"allowPrivilegeEscalation": false,
"capabilities": {
"drop": ["ALL"]
}
}
}]'
Deploy the kitchensink app:
oc new-app --template=eap72-basic-s2i \
-p APPLICATION_NAME=kitchensink \
-p MAVEN_ARGS_APPEND="-Dcom.redhat.xpaas.repo.jbossorg" \
-p SOURCE_REPOSITORY_URL="${GIT_SERVER}/${DEV_USERNAME}/kitchensink" \
-p SOURCE_REPOSITORY_REF=main \
-p CONTEXT_DIR=.
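While the S2I build runs, you can follow its log from the command line (the same log you will open in the console later):
oc logs -f bc/kitchensink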
Adjust the configuration of the application and add decoration (labels and annotations):
oc set env dc/kitchensink DB_HOST=kitchensink-db DB_PORT=5432 DB_NAME=kitchensink DB_USERNAME=luke DB_PASSWORD=secret && \
oc set probe dc/kitchensink --readiness --initial-delay-seconds=90 --failure-threshold=5 && \
oc set probe dc/kitchensink --liveness --initial-delay-seconds=90 --failure-threshold=5
oc label dc/kitchensink app.kubernetes.io/part-of=kitchensink-app --overwrite=true && \
oc label dc/kitchensink app.openshift.io/runtime=jboss --overwrite=true
oc annotate dc/kitchensink \
app.openshift.io/connects-to='[{"apiVersion":"apps/v1","kind":"Deployment","name":"kitchensink-db"}]' \
--overwrite=true
Open the web console, log in with the non-admin user, and open the topology view.
open https://console-openshift-console.apps.${CLUSTER_DOMAIN}/topology/ns/s2i-${DEV_USERNAME}?view=graph
Let’s see why S2I is so cool.
Let’s see the build log first!
open https://console-openshift-console.apps.${CLUSTER_DOMAIN}/k8s/ns/s2i-${DEV_USERNAME}/builds/kitchensink-1/logs
This is the key:
INFO Processing ImageSource mounts: extensions
INFO Processing ImageSource from /tmp/src/extensions
>>>>>>> Running install.sh <<<<<<
Now have a look at the pod log with this command:
oc logs dc/kitchensink -n s2i-${DEV_USERNAME} | grep -B5 -A10 "Executing postconfigure.sh"
Or here:
open https://console-openshift-console.apps.${CLUSTER_DOMAIN}/k8s/ns/s2i-${DEV_USERNAME}/deploymentconfigs/kitchensink
Open extensions/postconfigure.sh and extensions/setup.cli.
Show that it’s possible to replace the application on the running container.
Make a change in the local file src/main/webapp/index.xhtml, like the following:
<div>
<p>You have successfully deployed the JBoss Application in OpenShift 4.12</p> (1)
</div>
(1) Here the change is 4.12.
Explain the following command and run it:
oc project s2i-${DEV_USERNAME}
POD_NAME=$(oc get pod -l application=kitchensink -o json | jq -r .items[0].metadata.name)
echo "POD_NAME=${POD_NAME}"
mvn package -Popenshift
oc cp ./target/ROOT.war ${POD_NAME}:/deployments/ROOT.war
sleep 5
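The WAR is copied into /deployments, which the EAP image watches, so the running server should redeploy it in place. You can confirm the new artifact landed with:
oc exec ${POD_NAME} -- ls -l /deployments/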
Test the application again:
open https://kitchensink-s2i-${DEV_USERNAME}.apps.${CLUSTER_DOMAIN}/
At this stage Argo CD will deploy the application automatically using an Application object, which obtains plain descriptors from kitchensink-conf/basic/base. That folder contains descriptors obtained from the JBoss EAP 7.2 template.
Now run the next command, which creates the Application object.
cat <<EOF | oc apply -n openshift-gitops -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: kitchensink-basic-app-${DEV_USERNAME}
namespace: openshift-gitops
finalizers:
- resources-finalizer.argocd.argoproj.io
labels:
kitchensink-root-app: 'true'
username: ${DEV_USERNAME}
spec:
destination:
name: in-cluster
namespace: argo-${DEV_USERNAME}
ignoreDifferences:
- group: apps.openshift.io
jqPathExpressions:
- '.spec.template.spec.containers[].image'
kind: DeploymentConfig
project: default
source:
path: basic/base
repoURL: "${GIT_SERVER}/${DEV_USERNAME}/kitchensink-conf"
targetRevision: main
syncPolicy:
automated:
selfHeal: true
syncOptions:
- CreateNamespace=true
EOF
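You can also check the sync and health status of the Application from the CLI, for instance:
oc get application kitchensink-basic-app-${DEV_USERNAME} -n openshift-gitops \
  -o jsonpath='{.status.sync.status} {.status.health.status}{"\n"}'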
Open the next link to see the application deployed using Argo CD:
open ${ARGO_SERVER}/applications?search=basic-app
The next link takes you to the topology view of project argo-${DEV_USERNAME}:
open https://console-openshift-console.apps.${CLUSTER_DOMAIN}/topology/ns/argo-${DEV_USERNAME}?view=graph
Show that, again, S2I takes care of building the image so that you don’t have to care about it.
At this stage Argo CD will deploy the application automatically using an ApplicationSet object, which obtains plain descriptors from kitchensink-conf/basic/base and deploys them in two namespaces at the same time.
cat <<EOF | oc apply -n openshift-gitops -f -
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: kitchensink-basic-${DEV_USERNAME}
namespace: openshift-gitops
labels:
argocd-root-app: "true"
username: ${DEV_USERNAME}
spec:
generators:
- list:
elements:
- env: appset-a-${DEV_USERNAME}
desc: "ApplicationSet A"
- env: appset-b-${DEV_USERNAME}
desc: "ApplicationSet B"
template:
metadata:
name: kitchensink-basic-app-{{ env }}
namespace: openshift-gitops
labels:
kitchensink-root-app: "true"
username: ${DEV_USERNAME}
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
destination:
namespace: '{{ env }}'
name: in-cluster
ignoreDifferences:
- group: apps.openshift.io
kind: DeploymentConfig
jqPathExpressions:
- .spec.template.spec.containers[].image
project: default
syncPolicy:
automated:
selfHeal: true
syncOptions:
- CreateNamespace=true
source:
path: basic/base
repoURL: "${GIT_SERVER}/${DEV_USERNAME}/kitchensink-conf"
targetRevision: main
EOF
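To see the two Applications generated by the ApplicationSet (one per element of the list generator), a quick check is:
oc get applications -n openshift-gitops -l username=${DEV_USERNAME}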
Open the next link to see the application deployed using Argo CD:
open ${ARGO_SERVER}/applications?search=basic-app-appset
The next link takes you to the topology view of project appset-a-${DEV_USERNAME}:
open https://console-openshift-console.apps.${CLUSTER_DOMAIN}/topology/ns/appset-a-${DEV_USERNAME}?view=graph
And appset-b-${DEV_USERNAME}:
open https://console-openshift-console.apps.${CLUSTER_DOMAIN}/topology/ns/appset-b-${DEV_USERNAME}?view=graph
This stage is just to show that you could deploy plain descriptors from different folders in different namespaces.
At this stage Argo CD will deploy the application automatically using an ApplicationSet object, which uses kustomize to obtain descriptors from kitchensink-conf/kustomize/{{ env }}, where env is dev or test, and deploys in two namespaces at the same time.
cat <<EOF | oc apply -n openshift-gitops -f -
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: kitchensink-kustomize-${DEV_USERNAME}
namespace: openshift-gitops
labels:
argocd-root-app: "true"
username: ${DEV_USERNAME}
spec:
generators:
- list:
elements:
- env: dev
ns: kustomize-dev-${DEV_USERNAME}
desc: "Kustomize Dev"
- env: test
ns: kustomize-test-${DEV_USERNAME}
desc: "Kustomize Test"
template:
metadata:
name: kitchensink-kustomize-app-{{ env }}-${DEV_USERNAME}
namespace: openshift-gitops
labels:
kitchensink-root-app: "true"
username: ${DEV_USERNAME}
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
destination:
namespace: '{{ ns }}'
name: in-cluster
ignoreDifferences:
- group: apps.openshift.io
kind: DeploymentConfig
jqPathExpressions:
- .spec.template.spec.containers[].image
project: default
syncPolicy:
automated:
selfHeal: true
syncOptions:
- CreateNamespace=true
source:
path: kustomize/{{ env }}
repoURL: "${GIT_SERVER}/${DEV_USERNAME}/kitchensink-conf"
targetRevision: main
EOF
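If you have cloned your kitchensink-conf fork locally, you can render an overlay yourself to preview what Argo CD will apply (a sketch; run it from the repository root):
oc kustomize kustomize/dev | head -n 40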
Open the next link to see the application deployed using Argo CD:
open ${ARGO_SERVER}/applications?search=kustomize
The next link takes you to the topology view of project kustomize-dev-${DEV_USERNAME}:
open https://console-openshift-console.apps.${CLUSTER_DOMAIN}/topology/ns/kustomize-dev-${DEV_USERNAME}?view=graph
And kustomize-test-${DEV_USERNAME}:
open https://console-openshift-console.apps.${CLUSTER_DOMAIN}/topology/ns/kustomize-test-${DEV_USERNAME}?view=graph
This stage shows that you could deploy descriptors from different kustomize overlays in different namespaces using an ApplicationSet and the kustomize plugin.
At this stage Argo CD will deploy the application automatically using an Application object, which uses helm to obtain descriptors from kitchensink-conf/advanced/helm_base and deploys them in namespace helm-${DEV_USERNAME}.
This time the descriptor that deploys our application is a Deployment object instead of a DeploymentConfig.
cat <<EOF | oc apply -n openshift-gitops -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: kitchensink-helm-app-${DEV_USERNAME}
namespace: openshift-gitops
finalizers:
- resources-finalizer.argocd.argoproj.io
labels:
kitchensink-root-app: 'true'
username: ${DEV_USERNAME}
spec:
destination:
name: in-cluster
namespace: helm-${DEV_USERNAME}
ignoreDifferences:
- group: apps
jqPathExpressions:
- '.spec.template.spec.containers[].image'
kind: Deployment
project: default
source:
helm:
parameters:
- name: debug
value: 'true'
- name: baseNamespace
value: 'helm-${DEV_USERNAME}'
path: advanced/helm_base
repoURL: "${GIT_SERVER}/${DEV_USERNAME}/kitchensink-conf"
targetRevision: main
syncPolicy:
automated:
selfHeal: true
syncOptions:
- CreateNamespace=true
EOF
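As with kustomize, you can render the chart locally to inspect the generated Deployment before Argo CD syncs it (a sketch, assuming the helm CLI and a local clone of kitchensink-conf):
helm template kitchensink advanced/helm_base \
  --set debug=true \
  --set baseNamespace=helm-${DEV_USERNAME} | head -n 40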
Open the next link to see the application deployed using Argo CD:
open ${ARGO_SERVER}/applications?search=helm
The next link takes you to the topology view of project helm-${DEV_USERNAME}:
open https://console-openshift-console.apps.${CLUSTER_DOMAIN}/topology/ns/helm-${DEV_USERNAME}?view=graph
This stage shows that you could deploy descriptors rendered by the helm plugin using an Application object.
At this stage Argo CD will deploy the CI/CD pipelines automatically using an ApplicationSet object, which uses helm to obtain descriptors from kitchensink-conf/cicd and deploys them in namespace cicd-tekton-${DEV_USERNAME}.
cat <<EOF | oc apply -n openshift-gitops -f -
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: kitchensink-cicd-${DEV_USERNAME}
namespace: openshift-gitops
labels:
kitchensink-cicd-appset: "true"
spec:
generators:
- list:
elements:
- cluster: in-cluster
ns: "cicd-tekton-${DEV_USERNAME}"
template:
metadata:
name: kitchensink-cicd-${DEV_USERNAME}
namespace: openshift-gitops
labels:
kitchensink-cicd-app: "true"
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
destination:
namespace: '{{ ns }}'
name: '{{ cluster }}'
project: default
syncPolicy:
automated:
selfHeal: true
source:
helm:
parameters:
- name: kitchensinkRepoUrl
value: "${GIT_SERVER}/${DEV_USERNAME}/kitchensink"
- name: kitchensinkRevision
value: "main"
- name: kitchensinkConfRepoUrl
value: "${GIT_SERVER}/${DEV_USERNAME}/kitchensink-conf"
- name: kitchensinkConfRevision
value: "main"
- name: username
value: "${DEV_USERNAME}"
- name: gitSslVerify
value: "true"
- name: cicdNamespace
value: "cicd-tekton-${DEV_USERNAME}"
- name: overlayDevNamespace
value: "helm-kustomize-dev-${DEV_USERNAME}"
- name: overlayTestNamespace
value: "helm-kustomize-test-${DEV_USERNAME}"
# - name: containerRegistryServer
# value: myregistry-quay-quay-system.apps.cluster-7mggs.7mggs.sandbox952.opentlc.com
# - name: containerRegistryOrg
# value: ${DEV_USERNAME}
path: cicd
repoURL: "${GIT_SERVER}/${DEV_USERNAME}/kitchensink-conf"
targetRevision: main
EOF
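Once the Application syncs, the Tekton resources should appear in the CI/CD namespace; a quick inventory:
oc get pipelines,eventlisteners,routes -n cicd-tekton-${DEV_USERNAME}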
Open the next link to see the application deployed using Argo CD:
open ${ARGO_SERVER}/applications?search=cicd
The next link takes you to the pipelines view of project cicd-tekton-${DEV_USERNAME}:
open https://console-openshift-console.apps.${CLUSTER_DOMAIN}/dev-pipelines/ns/cicd-tekton-${DEV_USERNAME}
Create a Gitea personal access token (PAT) for the developer user; it will be used for the Git secret and the webhook calls below:
GIT_PAT=$(curl -k -s -XPOST -H "Content-Type: application/json" \
-d '{"name":"cicd'"${RANDOM}"'","scopes": ["repo"]}' \
-u ${DEV_USERNAME}:${DEV_PASSWORD} \
${GIT_SERVER}/api/v1/users/${DEV_USERNAME}/tokens | jq -r .sha1)
echo "GIT_PAT=${GIT_PAT}"
cat <<EOF | oc apply -n cicd-tekton-${DEV_USERNAME} -f -
apiVersion: v1
kind: Secret
metadata:
name: git-pat-secret
namespace: cicd-tekton-${DEV_USERNAME}
type: kubernetes.io/basic-auth
stringData:
user.name: ${DEV_USERNAME}
user.email: "${DEV_USERNAME}@example.com"
username: ${DEV_USERNAME}
password: ${GIT_PAT}
EOF
Annotate the Git secret so that Tekton can use it when cloning or pushing changes:
oc annotate -n cicd-tekton-${DEV_USERNAME} secret git-pat-secret \
"tekton.dev/git-0=${GIT_SERVER}"
We need a webhook to trigger the CI pipeline when changes are made to the code and another one to trigger the CD pipeline when Pull Requests are merged and closed.
KITCHENSINK_CI_EL_LISTENER_HOST=$(oc get route/el-kitchensink-ci-pl-push-gitea-listener -n cicd-tekton-${DEV_USERNAME} -o jsonpath='{.status.ingress[0].host}')
curl -k -X 'POST' "${GIT_SERVER}/api/v1/repos/${DEV_USERNAME}/kitchensink/hooks" \
-H "accept: application/json" \
-H "Authorization: token ${GIT_PAT}" \
-H "Content-Type: application/json" \
-d '{
"active": true,
"branch_filter": "*",
"config": {
"content_type": "json",
"url": "http://'"${KITCHENSINK_CI_EL_LISTENER_HOST}"'"
},
"events": [
"push"
],
"type": "gitea"
}'
KITCHENSINK_CD_EL_LISTENER_HOST=$(oc get route/el-kitchensink-cd-pl-pr-gitea-listener -n cicd-tekton-${DEV_USERNAME} -o jsonpath='{.status.ingress[0].host}')
curl -k -X 'POST' "${GIT_SERVER}/api/v1/repos/${DEV_USERNAME}/kitchensink-conf/hooks" \
-H "accept: application/json" \
-H "Authorization: token ${GIT_PAT}" \
-H "Content-Type: application/json" \
-d '{
"active": true,
"branch_filter": "*",
"config": {
"content_type": "json",
"url": "http://'"${KITCHENSINK_CD_EL_LISTENER_HOST}"'"
},
"events": [
"pull_request"
],
"type": "gitea"
}'
Expect outputs like this:
{"id":2,"type":"gitea","config":{"content_type":"json","url":"http://el-kitchensink-cd-pl-pr-gitea-listener-cicd-tekton-user1.apps.example.com"},"events":["pull_request","pull_request_assign","pull_request_label","pull_request_milestone","pull_request_comment","pull_request_review_approved","pull_request_review_rejected","pull_request_review_comment","pull_request_sync"],"authorization_header":"","active":true,"updated_at":"2023-07-27T07:18:06Z","created_at":"2023-07-27T07:18:06Z"}
Stage 7: Launch Kitchensink on JBoss EAP 7.2 with Argo CD using helm + kustomize to deploy in two overlays
At this stage Argo CD will deploy the application automatically using an ApplicationSet object, which uses a custom plugin, kustomized-helm, to obtain descriptors from kitchensink-conf/advanced/overlays/{{ env }}, where env is dev or test, and deploys in two namespaces at the same time.
cat <<EOF | oc apply -n openshift-gitops -f -
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: kitchensink-kustomized-helm-${DEV_USERNAME}
namespace: openshift-gitops
labels:
argocd-root-app: "true"
username: ${DEV_USERNAME}
spec:
generators:
- list:
elements:
- env: dev
ns: helm-kustomize-dev-${DEV_USERNAME}
desc: "Helm + Kustomize (Dev)"
- env: test
ns: helm-kustomize-test-${DEV_USERNAME}
desc: "Helm + Kustomize (Test)"
template:
metadata:
name: kitchensink-kustomized-helm-app-{{ env }}-${DEV_USERNAME}
namespace: openshift-gitops
labels:
kitchensink-root-app: "true"
username: ${DEV_USERNAME}
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
destination:
namespace: '{{ ns }}'
name: in-cluster
project: default
syncPolicy:
automated:
selfHeal: true
syncOptions:
- CreateNamespace=true
source:
path: advanced/overlays/{{ env }}
repoURL: "${GIT_SERVER}/${DEV_USERNAME}/kitchensink-conf"
targetRevision: main
plugin:
env:
- name: DEBUG
value: 'false'
- name: BASE_NAMESPACE
value: 'cicd-tekton-${DEV_USERNAME}'
name: kustomized-helm
EOF
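As before, you can verify the generated Applications from the CLI:
oc get applications -n openshift-gitops | grep kustomized-helm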
Open the next link to see the application deployed using Argo CD:
open ${ARGO_SERVER}/applications?search=kustomized-helm
The next link takes you to the topology view of project helm-kustomize-dev-${DEV_USERNAME}:
open https://console-openshift-console.apps.${CLUSTER_DOMAIN}/topology/ns/helm-kustomize-dev-${DEV_USERNAME}?view=graph
And to the topology view of project helm-kustomize-test-${DEV_USERNAME}:
open https://console-openshift-console.apps.${CLUSTER_DOMAIN}/topology/ns/helm-kustomize-test-${DEV_USERNAME}?view=graph
Make a change to src/main/webapp/index.xhtml, for instance:
<div>
<p>You have successfully deployed the JBoss Application in OpenShift !!!</p>
</div>
Do it using this link.
open ${GIT_SERVER}/${DEV_USERNAME}/kitchensink/_edit/main/src/main/webapp/index.xhtml
Watch the pipeline progress:
open https://console-openshift-console.apps.${CLUSTER_DOMAIN}/dev-pipelines/ns/cicd-tekton-${DEV_USERNAME}
Then go to the configuration repository kitchensink-conf, look for the new Pull Request, and approve it so that the CD pipeline triggers and deploys the new image on dev.
Finally, do the same to deploy on test.
Since you started this guide you’ve been deploying Kitchensink on JBoss EAP 7.2 in different ways and namespaces, always using S2I to build the image from the source code of the Jakarta EE application.
One of the reasons we think S2I is awesome is that this framework should help you whenever you upgrade JBoss itself. The idea is that, because you’re using the S2I extensions mechanism, which relies on the JBoss S2I helper scripts, you don’t have to care about the underlying standalone.xml file… almost. In this lab you will upgrade JBoss from 7.2 to 7.4; let’s deal with that “almost”.
There are a number of differences between 7.4 and 7.2, but we only care about one for this lab: the PostgreSQL driver definition is missing.
Let’s upgrade the builder image from 7.2 to 7.4, BUT only for helm-kustomize-dev and helm-kustomize-test, for simplicity.
There is one location where we have to upgrade the builder image:
⇒ kitchensink-conf/cicd/values.yaml
Finally, the following link will take you to the file kitchensink-conf/cicd/values.yaml, where you have to make the following change.
open ${GIT_SERVER}/${DEV_USERNAME}/kitchensink-conf/_edit/main/cicd/values.yaml#L21
Once there, you have to change this:
kitchensinkBuilderImage: jboss-eap72-openshift:1.2
with this:
kitchensinkBuilderImage: jboss-eap74-openjdk8-openshift:7.4.0
Once you have made the change, scroll down and click on Commit Changes.
Now let’s force a refresh of all our applications at once before we make the last change, which will trigger the CI pipeline. To do that, go to Argo CD, click on REFRESH APPS and then on ALL, as in the next picture.
Use this link to open Argo CD and see all the applications, then proceed as explained.
Tip: the following link has a query parameter, search, which will show only the applications of ${DEV_USERNAME}.
open ${ARGO_SERVER}/applications?search=${DEV_USERNAME}
Copy and paste the following link into a browser. It will take you to the file kitchensink/extensions/install.sh, where you have to uncomment the line where the script configures the PostgreSQL driver.
open ${GIT_SERVER}/${DEV_USERNAME}/kitchensink/_edit/main/extensions/install.sh#L12
Once there, you have to change this:
# configure_drivers ${injected_dir}/driver-postgresql.env
with this:
configure_drivers ${injected_dir}/driver-postgresql.env
Once you have made the change, scroll down and click on Commit Changes.
Remember, you have committed and pushed a change to the CODE of kitchensink, because install.sh is CODE; granted, it is code that adapts the image to your needs, but CODE nonetheless!
As with all changes to the code, a webhook will trigger the CI pipeline. Also remember that the last action in that pipeline generates a Pull Request; it doesn’t deploy the new image yet!
You have made changes in install.sh, which should have triggered the CI pipeline you tested before.
Warning: before approving the Pull Request in kitchensink-conf, let’s check the version of JBoss first!
oc logs deployment/kitchensink -n helm-kustomize-dev-${DEV_USERNAME} | grep -e "JBoss EAP 7"
You should expect something like:
17:51:03,635 INFO [org.jboss.as] (MSC service thread 1-1) WFLYSRV0049: JBoss EAP 7.2.9.GA (WildFly Core 6.0.30.Final-redhat-00001) starting
17:51:09,121 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: JBoss EAP 7.2.9.GA (WildFly Core 6.0.30.Final-redhat-00001) started in 5894ms - Started 65 of 86 services (30 services are lazy, passive or on-demand)
17:51:09,667 INFO [org.jboss.as] (MSC service thread 1-1) WFLYSRV0050: JBoss EAP 7.2.9.GA (WildFly Core 6.0.30.Final-redhat-00001) stopped in 37ms
17:51:11,987 INFO [org.jboss.as] (MSC service thread 1-1) WFLYSRV0049: JBoss EAP 7.2.9.GA (WildFly Core 6.0.30.Final-redhat-00001) starting
17:51:48,114 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: JBoss EAP 7.2.9.GA (WildFly Core 6.0.30.Final-redhat-00001) started in 37487ms - Started 581 of 824 services (481 services are lazy, passive or on-demand)
Now do the same as before: go to kitchensink-conf, look for the new Pull Request, approve it, and wait until the new image has been deployed in dev.
Once the new image has been rolled out, you can open the link to the UI, as you have done before… and then check in the logs whether the new version is 7.4.*. You can do it with the next command.
oc logs deployment/kitchensink -n helm-kustomize-dev-${DEV_USERNAME} | grep -e "JBoss EAP 7"
You should expect something like:
18:13:17,374 INFO [org.jboss.as] (MSC service thread 1-2) WFLYSRV0049: JBoss EAP 7.4.11.GA (WildFly Core 15.0.26.Final-redhat-00001) starting
18:13:19,474 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: JBoss EAP 7.4.11.GA (WildFly Core 15.0.26.Final-redhat-00001) started in 3238ms - Started 75 of 99 services (38 services are lazy, passive or on-demand)
18:13:22,327 INFO [org.jboss.as] (MSC service thread 1-2) WFLYSRV0049: JBoss EAP 7.4.11.GA (WildFly Core 15.0.26.Final-redhat-00001) starting
18:13:24,585 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: JBoss EAP 7.4.11.GA (WildFly Core 15.0.26.Final-redhat-00001) started in 2480ms - Started 59 of 90 services (38 services are lazy, passive or on-demand)
18:13:24,930 INFO [org.jboss.as] (MSC service thread 1-2) WFLYSRV0050: JBoss EAP 7.4.11.GA (WildFly Core 15.0.26.Final-redhat-00001) stopped in 25ms
18:13:27,534 INFO [org.jboss.as] (MSC service thread 1-2) WFLYSRV0050: JBoss EAP 7.4.11.GA (WildFly Core 15.0.26.Final-redhat-00001) stopped in 36ms
18:13:27,535 INFO [org.jboss.as] (MSC service thread 1-2) WFLYSRV0049: JBoss EAP 7.4.11.GA (WildFly Core 15.0.26.Final-redhat-00001) starting
18:13:38,480 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: JBoss EAP 7.4.11.GA (WildFly Core 15.0.26.Final-redhat-00001) started in 10944ms - Started 595 of 869 services (525 services are lazy, passive or on-demand)
Then do the same with the new Pull Request and wait until the new image has been deployed in test.
POD_NAME=$(oc get pod -n helm-kustomize-test-${DEV_USERNAME} --field-selector=status.phase==Running -o jsonpath='{.items[0].metadata.name}')
oc logs ${POD_NAME} -n helm-kustomize-test-${DEV_USERNAME} | grep -e "JBoss EAP 7"
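If you want to check both overlays in one go, a small loop (a sketch) prints the first matching version line per namespace:
for NS in helm-kustomize-dev-${DEV_USERNAME} helm-kustomize-test-${DEV_USERNAME}; do
  echo "== ${NS} =="
  oc logs deployment/kitchensink -n ${NS} | grep -m1 "JBoss EAP 7"
done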