A collection of Kafka Streams services that use data generated by arcade.redhat.com/shipwars

Shipwars Streams

Java application that uses Quarkus and Apache Kafka Streams to analyse events sent to Kafka by the Shipwars Game Server.

Streams Topologies

  • shot-stream-enricher - Joins shot event data with the associated player data. This enriched topic is used by the shot-distribution-aggregator.
  • match-aggregator - Creates an aggregate record for each match. This facilitates match analysis and replay.
  • shot-distribution-aggregator - Aggregates the distribution of shots for given game generations (each game server deployment is a new generation).
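These topologies compose: enriched shots from shot-stream-enricher feed the shot-distribution-aggregator. As an illustration only, the per-generation, per-cell counting can be sketched in Python (the real implementation is a Java Kafka Streams topology; the event field names below are assumptions):

```python
# Illustrative sketch of the shot-distribution aggregation. The actual
# topology is Java/Kafka Streams; event field names here are assumptions.

def aggregate_shots(events):
    """Fold enriched shot events into {generation: {cell: counters}}."""
    dist = {}
    for event in events:
        cells = dist.setdefault(event["generation"], {})
        counters = cells.setdefault(
            event["cell"],
            {"ai_hit": 0, "ai_miss": 0, "human_hit": 0, "human_miss": 0},
        )
        outcome = "hit" if event["hit"] else "miss"
        counters[f'{event["player"]}_{outcome}'] += 1
    return dist

sample = [
    {"generation": "game-1", "cell": "0,0", "player": "ai", "hit": True},
    {"generation": "game-1", "cell": "0,0", "player": "human", "hit": False},
]
print(aggregate_shots(sample))
```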


Architecture of the overall application and Quarkus Kafka Streams Topology.

The match-aggregator exposes the following endpoint:

  • GET /replays - Returns an array of replays. A count query parameter can be provided to limit the number of results returned - it defaults to 12.
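For example, a client might call this endpoint as follows (the localhost:8080 base URL is an assumption; use whatever host and port the module is running on):

```python
import json
import urllib.request
from urllib.parse import urlencode

def replays_url(base_url, count=12):
    """Build the GET /replays URL; count limits how many replays return."""
    return f"{base_url}/replays?{urlencode({'count': count})}"

def fetch_replays(base_url, count=12):
    """Fetch and decode the replay array from a running match-aggregator."""
    with urllib.request.urlopen(replays_url(base_url, count)) as resp:
        return json.load(resp)

# e.g. fetch_replays("http://localhost:8080", count=5) returns up to 5 replays
print(replays_url("http://localhost:8080", count=5))
```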

The shot-distribution-aggregator exposes the following endpoints:

  • GET /shot-distribution - Returns the shot distribution for all game generations in JSON format.
  • GET /shot-distribution/stream - Opens an HTTP Server-Sent Events stream that sends each enriched shot to the HTTP client.
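Each Server-Sent Events message carries one enriched shot as JSON in its data field. A minimal sketch of decoding the raw SSE text (the payload fields shown are illustrative, not the exact enriched-shot schema):

```python
import json

def parse_sse_events(raw):
    """Extract the JSON payloads from the data: lines of an SSE stream."""
    events = []
    for frame in raw.strip().split("\n\n"):  # frames are blank-line separated
        data = [line[5:].strip() for line in frame.splitlines()
                if line.startswith("data:")]
        if data:
            events.append(json.loads("\n".join(data)))
    return events

stream = 'data: {"cell": "0,0", "hit": true}\n\ndata: {"cell": "4,4", "hit": false}\n\n'
print(parse_sse_events(stream))
```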

The shot distribution JSON response contains top-level game generation keys, and within these are the hit/miss counts for each cell and each player type.

{
  "a-unique-game-uuid": {
    "0,0": {
      "ai_hit": 6,
      "ai_miss": 2,
      "human_hit": 3,
      "human_miss": 5
    },
    "0,1": {
      "ai_hit": 1,
      "ai_miss": 6,
      "human_hit": 2,
      "human_miss": 5
    }
    // Data for every cell on the 5x5 grid, i.e. up to key "4,4"...
  },
  "another-unique-game-uuid": { /* Data for all cells in that game */ }
}
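A response of this shape is straightforward to post-process. For instance, summing the total number of shots recorded per game generation (a Python sketch over the sample above):

```python
def shots_per_generation(distribution):
    """Sum all hit/miss counters across every cell of each generation."""
    return {
        generation: sum(sum(cell.values()) for cell in cells.values())
        for generation, cells in distribution.items()
    }

distribution = {
    "a-unique-game-uuid": {
        "0,0": {"ai_hit": 6, "ai_miss": 2, "human_hit": 3, "human_miss": 5},
        "0,1": {"ai_hit": 1, "ai_miss": 6, "human_hit": 2, "human_miss": 5},
    }
}
print(shots_per_generation(distribution))  # {'a-unique-game-uuid': 30}
```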

Use with OpenShift Streams for Apache Kafka

Each module is configured to use SASL_SSL to connect to Kafka, since this is the standard security protocol for OpenShift Streams for Apache Kafka.

# Get the bootstrap URL using the rhoas CLI. A client ID and secret can be
# obtained using the rhoas CLI or cloud.redhat.com UI
# Specify which application to run using the -f flag
KAFKA_BOOTSTRAP_SERVERS=$(rhoas kafka describe | jq .bootstrapServerHost -r) \
KAFKA_CLIENT_ID="replace-me" \
KAFKA_CLIENT_SECRET="replace-me" \
./mvnw quarkus:dev -f shot-distribution-aggregator/pom.xml

Running Locally

Refer to the Docker/Podman guides in Shipwars Deployment. For example, to run the Shipwars services using Docker:

KAFKACONNECTION_BOOTSTRAPSERVERS=$(rhoas kafka describe | jq .bootstrapServerHost -r) \
KAFKACONNECTION_SSL=true \
KAFKACONNECTION_USER="replace-me" \
KAFKACONNECTION_PASSWORD="replace-me" \
docker-compose up --force-recreate --build

Next, start each Kafka Streams module by targeting the necessary pom.xml:

# Get the bootstrap URL using the rhoas CLI. A client ID and secret can be
# obtained using the rhoas CLI or cloud.redhat.com UI
KAFKA_BOOTSTRAP_SERVERS=$(rhoas kafka describe | jq .bootstrapServerHost -r) \
KAFKA_CLIENT_ID="replace-me" \
KAFKA_CLIENT_SECRET="replace-me" \
./mvnw quarkus:dev -f shot-distribution-aggregator/pom.xml

Note: The shot-distribution-aggregator relies on the shot-stream-enricher for data, so you need to run both.

Building

JAR files

This will build the JARs for all of the application modules.

mvn clean install

Docker Images

Each module contains a scripts folder. Run the build.sh script from the root of the module you'd like to build:

# These will use defaults if not provided
IMAGE_TAG=latest \
IMAGE_REPOSITORY=quay.io/yourusername/name-of-image \
./scripts/build.sh

To push the image, use the push.sh script:

# These will use defaults if not provided
IMAGE_TAG=latest \
IMAGE_REPOSITORY=quay.io/yourusername/name-of-image \
./scripts/push.sh

Running for Development

Follow the guide above to run the associated services using Docker/Podman, but remove this service from the docker-compose file, i.e. start all of the services using docker-compose up except this one.

Once the other services have started, you can start this one using:

QUARKUS_KAFKA_STREAMS_BOOTSTRAP_SERVERS=localhost:9094 \
KAFKA_BOOTSTRAP_SERVERS=localhost:9094 \
QUARKUS_HTTP_PORT=8585 \
./mvnw quarkus:dev -f shot-distribution-aggregator/pom.xml

Scaling

Kafka Streams pipelines can be scaled out, i.e. the load can be distributed amongst multiple application instances running the same pipeline.

This particular example has not been designed to support this functionality.
