
Commit 0478e00 (2 parents: a676633 + a133801)

Merge upstream work into my fork, override any local changes with upstream.

Merge commit 'a133801e7d07ffda1674101302d98166f4ce7224'
# Conflicts:
#   scripts/kubernetes/nginx-prd.yaml
#   scripts/kubernetes/php-cli.yaml
#   scripts/kubernetes/php-fpm-prd.yaml

40 files changed: +957 -389 lines

README.md (+21 -6)
@@ -1,17 +1,32 @@
 # NGINX and PHP-FPM Container Cluster on IBM Bluemix
-Demonstration of a set of NGINX and PHP containers deployed to the IBM Bluemix Container Service to support multiple Drupal 8 sites. These containers mount a persistent volume for sites (which change after build time) and connect to MySQL, Redis, and Memcached services from the Bluemix catalog (not self-hosted containers inside the same cluster).
+This project demonstrates how to deploy a Drupal 8 environment on a cluster of NGINX and PHP containers using the IBM Container Service and several Bluemix catalog services.
 
-This shows several basic concepts for deploying a multi-container NGINX and PHP cluster to Kubernetes and exposing it as services. More complex approaches might use Helm or more sophisticated build-and-deploy workflows that deploy on every commit to a GitHub repo.
+These containers mount a persistent volume for sites (which change after build time) and connect to MySQL, Redis, and Memcached services from the Bluemix catalog (not self-hosted containers inside the same cluster).
+
+After deployment, Drupal site developers can manage the lifecycle of sites by delivering configuration or code changes to specific folders in this repository. Commits trigger fresh rebuilds and deploys in an IBM DevOps Services continuous integration pipeline.
+
+# Features of the IBM Cloud platform highlighted
+- A secure, high-performance IBM Container Service cluster (based on Kubernetes) with advanced network and storage configuration options.
+- Integration with managed MySQL, Redis, and Memcached databases-as-a-service provided through the Bluemix service catalog.
+- Multiple levels of security for Docker images stored in the IBM Container Registry, including automatic scanning by the IBM Vulnerability Advisor.
+- Automatic build and deploy workflows with IBM DevOps Services.
+
+# Logical overview diagram
+There are two separate Drupal installations deployed onto the container cluster: one represents a "staging" environment and one a "production" environment. Each has its own dedicated services and volume mounts. A CLI container can run `drush` or scripts such as `transfer-files.sh` and `transfer-data.sh` on those environments to synchronize them.
 
-The PHP-FPM containers also include a built-in Drupal 8.3 package, and mount the volume for shared read/write access to the `/var/www/html/sites/default/files` directory.
 
 ![](docs/img/architecture.png)
 
-# One time Container Service and Bluemix services setup
+# Set up the proof of concept
+
+## One-time Container Service and Bluemix services setup
 See the Container Service Kubernetes and Bluemix services (MySQL, Redis, Memcached) [configuration instructions](docs/INITIAL-SETUP.md).
 
-# Building and deploying the first set of containers
+## Building and deploying the first set of containers
 See the Docker container build and Kubernetes deployment [instructions](docs/DEPLOY-CONTAINERS.md).
 
-# Ongoing development and operations with GitHub commits
+## Ongoing development and operations with GitHub commits
 See the ongoing development [instructions](docs/ONGOING-DEVELOPMENT.md) and the work-in-progress DevOps [pipeline docs](docs/PIPELINE-SETUP.md).
+
+## Synchronizing data from production back to staging
+There are two synchronization scripts that can be invoked to bring user-generated changes to files or data from production back into the staging environment.
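The two synchronization scripts referenced above are the ones added under `code/drush/` later in this commit. A minimal usage sketch, following the `kubectl exec` form shown in docs/DEMO.md (the pod name below is a placeholder and must be looked up in your own cluster):

```bash
# Find the PHP CLI pod name first; the exact generated name is cluster-specific.
kubectl get pods

# Placeholder pod name; substitute the real name from the listing above.
PHP_CLI_CONTAINER_NAME=php-cli-xxxxxxxxxx-xxxxx

# Copy production user-generated files back to staging, then restore the database dump.
kubectl exec ${PHP_CLI_CONTAINER_NAME} /root/drush/transfer-files.sh
kubectl exec ${PHP_CLI_CONTAINER_NAME} /root/drush/transfer-data.sh
```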

code/README.md (+2)
@@ -0,0 +1,2 @@
+# Code changes
+This directory contains base `code` files. If anything changes in this directory, it will trigger a `code` Docker image rebuild and a Kubernetes rolling deploy through the pipeline.

code/drush/drush-status.sh (+12)
@@ -0,0 +1,12 @@
+#!/bin/bash
+
+# Connect to the local Drupal environment and run Drush commands
+echo "Running Drush commands on the PHP-FPM container."
+cd /root/drush/sites/default/
+drush --version
+
+echo "Dumping environment."
+drush status
+
+echo "Executing user login command."
+drush user-login
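As docs/DEMO.md notes, this script is injected into the PHP-FPM containers rather than the CLI container. A minimal invocation sketch (the pod name is a placeholder):

```bash
# Placeholder pod name; look up the real one with `kubectl get pods`.
PHP_FPM_CONTAINER_NAME=php-fpm-prd-xxxxxxxxxx-xxxxx

# Run the status/login script inside the PHP-FPM pod, as shown in docs/DEMO.md.
kubectl exec ${PHP_FPM_CONTAINER_NAME} /var/www/drupal/drush/drush-status.sh
```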

code/drush/transfer-data.sh (+12)
@@ -0,0 +1,12 @@
+#!/bin/bash
+
+CURRENT_DATE=$(/bin/date "+%m-%d-%y")
+
+# Dump production data
+echo "Extracting production data from the ${MYSQL_NAME_PRD} database on ${MYSQL_HOST_PRD}."
+mysqldump --verbose --add-drop-table --quote-names -u${MYSQL_USER_PRD} -p${MYSQL_PASS_PRD} -h${MYSQL_HOST_PRD} ${MYSQL_NAME_PRD} > /root/backups/production-backup.sql
+tar -zcvf /root/backups/backup-${CURRENT_DATE}.tar.gz /root/backups/production-backup.sql
+
+# Restore data into staging
+echo "Restoring production data to staging database ${MYSQL_NAME_STG} on ${MYSQL_HOST_STG}."
+mysql --verbose -u${MYSQL_USER_STG} -p${MYSQL_PASS_STG} -h${MYSQL_HOST_STG} ${MYSQL_NAME_STG} < /root/backups/production-backup.sql
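The script also leaves a dated archive in `/root/backups` alongside the raw dump. A hedged sketch of restoring staging from one of those archives by hand, assuming the same environment variables are set (the date suffix is illustrative):

```bash
# GNU tar strips the leading "/" at archive creation time, so extract from "/"
# to put the dump back at /root/backups/production-backup.sql.
tar -zxvf /root/backups/backup-06-15-17.tar.gz -C /

# Re-run the restore step against staging with the same connection variables.
mysql --verbose -u${MYSQL_USER_STG} -p${MYSQL_PASS_STG} -h${MYSQL_HOST_STG} ${MYSQL_NAME_STG} < /root/backups/production-backup.sql
```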

code/drush/transfer-files.sh (+8)
@@ -0,0 +1,8 @@
+#!/bin/bash
+
+PRD_PERSISTENT_VOLUME=/var/www/drupal/web/sites/default/files-prd
+STG_PERSISTENT_VOLUME=/var/www/drupal/web/sites/default/files-stg
+
+# Copy production user-generated files back to staging
+echo "Synchronizing data from production ${PRD_PERSISTENT_VOLUME} to ${STG_PERSISTENT_VOLUME}."
+rsync -av ${PRD_PERSISTENT_VOLUME}/ ${STG_PERSISTENT_VOLUME}
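To preview what the copy would change before running the script, rsync's dry-run flag can be used with the same paths (a suggestion, not part of the committed script):

```bash
# -n (--dry-run) lists what rsync would transfer without copying anything.
rsync -avn /var/www/drupal/web/sites/default/files-prd/ /var/www/drupal/web/sites/default/files-stg
```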

config/README.md (+2)
@@ -0,0 +1,2 @@
+# Configuration declarations
+This directory contains base `config` image versions. If anything changes in this directory, it will trigger both a `config` and a `code` Docker image rebuild and a Kubernetes rolling deploy through the pipeline.

docs/DEMO.md (+50)
@@ -0,0 +1,50 @@
+# Drupal sites on the IBM Container Service
+The shift towards cloud-native deployment models allows developers to package their applications consistently between staging and production, and helps them scale their deployments horizontally across a large pool of compute resources more quickly.
+
+In contrast to other cloud-native approaches, such as Platform-as-a-Service with Heroku or Cloud Foundry, container orchestration systems like Kubernetes are less opinionated: they offer a great deal of flexibility at the cost of a single prescribed set of guidelines. That can make them more attractive to teams migrating from a virtual machine or bare metal approach than moving directly to PaaS.
+
+This PoC shows how one might migrate a traditional web-server, application-server, and database-server application into a container-based model that depends on cloud services, speeding application development by reducing the time spent managing servers across a large deployment target environment.
+
+## 1. Functional Drupal site running on the IBM Cloud
+A managed Kubernetes cluster from the IBM Cloud Container Service provides the fabric on which to install a set of NGINX and PHP-FPM containers that run Drupal.
+
+These containers package custom site code, and the underlying Kubernetes fabric can bind them to data services, load balancers, and storage volumes provided by the IBM Cloud.
+
+### 1.1 Initial environment setup
+The [initial setup](INITIAL-SETUP.md) instructions show how to set up a Kubernetes cluster and provision the MySQL, Redis, and Memcached services needed by the Drupal cluster.
+
+### 1.2 First container cluster deployment
+Once the fabric and services are configured, you can build container images and deploy them to the IBM Container Service manually or through an automated pipeline. The [container deployment instructions](DEPLOY-CONTAINERS.md) describe how.
+
+### 1.3 Complete Drupal configuration
+Connect to Drupal to finish the installation. Once all of the containers have reached the `Running` state (check status with `kubectl get pods`), you can find the public IP of the Drupal cluster with `kubectl get services`. Because the `settings.php` file has been set up to read the MySQL connection information from the Kubernetes environment, the installation is shorter than normal.
+
+## 2. Clearly defined and easy to implement process for pushing code updates
+Once the initial environment is set up, you can initiate additional build, test, and deploy workflows by committing code to specific folders in this repository. This simulates GitHub or BitBucket webhooks.
+
+### 2.1 Updating the underlying NGINX and PHP container images
+You can commit updated NGINX or PHP version files to the `config` directory. This triggers base image rebuilds in the DevOps pipeline, which in turn rebuilds the custom Drupal-based `code` images on top of them and deploys them.
+
+### 2.2 Updating custom code and triggering code layer rebuilds
+You can commit code that should be layered on top of the base NGINX, PHP, and Drupal installation by changing files in the `code` directory. This triggers a custom code rebuild and deploy.
+
+## 3. Synchronize or migrate one database to another database
+Ongoing management of the Drupal cluster can be performed with arbitrary shell commands and `drush` commands invoked by logging into the PHP-CLI container. This container could also be extended to run arbitrary commands on startup through the DevOps pipeline.
+
+### 3.1. Using the PHP CLI container to execute arbitrary commands
+You can exec into the PHP CLI container to [run arbitrary bash or MySQL commands](PHP-CLI-DRUSH.md).
+
+### 3.2. Using the PHP CLI container to execute migration commands
+You can exec into the PHP CLI container to [run the `transfer-data.sh` and `transfer-files.sh` scripts injected from the `code/drush` directory](PHP-CLI-DRUSH.md). For example: `kubectl exec ${PHP_CLI_CONTAINER_NAME} /root/drush/transfer-files.sh` and `kubectl exec ${PHP_CLI_CONTAINER_NAME} /root/drush/transfer-data.sh`.
+
+### 3.3. Using a PHP FPM container to execute `drush` commands
+You can exec into the PHP FPM container to [run the `drush-status.sh` script injected from the `code/drush` directory](PHP-CLI-DRUSH.md). For example: `kubectl exec ${PHP_FPM_CONTAINER_NAME} /var/www/drupal/drush/drush-status.sh`.
+
+## 4. Taking advantage of a continuous integration pipeline
+The [pipeline setup instructions](PIPELINE-SETUP.md) show how IBM DevOps Services can be used with user-defined scripts and webhooks to initiate build, test, and deployment flows. These can incorporate unit test scripts, security vulnerability assessments, and blue/green rolling deploys. These workflows can also reuse build tool Docker images, which is a new feature of IBM DevOps Services.
+
+### 4.1. Checking in configuration or code updates
+Once configured, the pipeline detects changes to the top-level `config` and `code` directories and triggers new build and deploy processes depending on the change.
+
+### 4.2 Synchronizing data from production to staging
+You can also use the pipeline UI to execute data and file synchronization, and extend this model to run arbitrary scripts.
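Section 1.3 above relies on two quick cluster lookups; a minimal sketch of those commands (resource names will vary by deployment):

```bash
# Wait until every pod reports the Running state.
kubectl get pods

# The public IP and port of the service fronting Drupal appear in this listing.
kubectl get services
```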
