This repository has been archived by the owner on Feb 7, 2025. It is now read-only.

Setup Constructor Schedule #16

Merged
merged 21 commits into main from pay-2317-setup-constructor-schedule on Sep 7, 2022

Conversation

marctemp
Contributor

@marctemp marked this pull request as ready for review September 5, 2022 11:36
const value = result.value

// Add some helper props
Member

Haha good call, I believe some of the first ffc repos copied their config from some of the flood services which had these comments in. I think we've been copying and pasting this file continuously for three years 😆


const start = async () => {
  try {
    await batchSchedule()
Member

I think this name should be more descriptive; as batchSchedule I had to read through the function to work out what it was going to do. What about something like schedulePendingSettlements or something?

Contributor Author

Agreed, I only named it that to match the process map model. Will update it and update the model diagram too.

const validateSchedule = require('./validate-schedule')
const updateScheduledSettlementByScheduleId = require('./update-scheduled-settlement-by-schedule-id')

const batchSchedule = async (started = new Date()) => {
Member

started could just be declared as a const in the function as no argument is passed in.

I think other repos have something like this to help make testing easier but I think using Jest to mock the system time will allow us to keep the code simpler.
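
A minimal sketch of what that test setup could look like, assuming Jest's modern fake timers (the require path and the mocked date are illustrative, not taken from the repo):

const batchSchedule = require('../../app/processing/batch-schedule')

describe('batch schedule', () => {
  beforeEach(() => {
    // freeze the system clock so new Date() inside batchSchedule is deterministic
    jest.useFakeTimers()
    jest.setSystemTime(new Date(2022, 8, 5))
  })

  afterEach(() => {
    jest.useRealTimers()
  })

  test('uses mocked system time as started date', async () => {
    await batchSchedule()
    // assertions against new Date(2022, 8, 5) would go here
  })
})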

Contributor Author

THANK YOU!

Added a lot of extra test cases for little to no benefit

const config = require('../../config').processingConfig

const getScheduledSettlements = async (started, transaction) => {
  return db.schedule.findAll({
Member

Without lock and skipLocked, this does mean multiple replicas of this service will pick up the same settlements.

Appreciate that we agreed to only have one replica for first release, but I think it would still be wise to design for concurrency where possible.
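
Something like this might work, assuming Sequelize against Postgres. The where clause, column names, and the assumption that the db object exposes Sequelize are illustrative; the lock and skipLocked options are the point here:

const db = require('../../data')

const getScheduledSettlements = async (started, transaction) => {
  return db.schedule.findAll({
    transaction,
    // take a row-level lock on the selected rows within this transaction and
    // skip rows already locked by another replica, so two replicas never
    // claim the same settlements
    lock: true,
    skipLocked: true,
    where: {
      completed: null,
      planned: { [db.Sequelize.Op.lte]: started }
    }
  })
}

module.exports = getScheduledSettlements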

Contributor Author

Aaahh gotcha, gotcha

Does this need some sort of 'limit to the first x rows' within the findAll, if such a param exists?

Just some thoughts on the replica situation:
Does replica A pick up all schedules (say 2k+), which causes its memory/CPU to spike so K8s brings up another replica, and then replica B looks to pick up everything in that table? At that point (just given the domain of this) is there likely to be any/many new records that won't have been picked up by replica A? I guess this is very sensitive to our configured Helm values.

Member

Yes good point. You can set that with limit: 100 for example, then it will only lock 100 rows. The next replica will then skip those locked rows and pick up the next 100.
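
Building on the earlier sketch, roughly like this (the require path and the value of 100 are assumptions, and the where clause is omitted for brevity):

const db = require('../../data')

const getScheduledSettlements = async (started, transaction) => {
  return db.schedule.findAll({
    transaction,
    lock: true,
    skipLocked: true,
    // claim at most 100 unlocked rows; other replicas skip these and take the next batch
    limit: 100
  })
}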

You're correct, the behaviour depends on what we set up in Helm, so I'll describe a few of those options.

The actual value needs some thought (and testing) as we want to maximise throughput, and the answer is specific to each service. For example, is it more performant to keep WIP low for each replica but have a greater number of replicas? Or is it better to have fewer replicas with greater WIP? Greater WIP may mean those pods need more CPU/memory allocated to them.

Set number of replicas > 1

In the Helm chart you can define how many instances of the service should be running, independently of autoscaling. So if we set that value to 4 there will always be four pods running, ready to process.

Use horizontal pod autoscaling

We declare a CPU and/or memory usage threshold for a pod, and if that threshold is exceeded then another replica is created, up to the maximum number of replicas we permit in the Helm chart.

That threshold can either be a fixed CPU/memory usage or a percentage of usage against the limit. The latter is what we tend to do.

It's a bit of an art to get the balance right of how much memory and CPU we allocate to a pod, and at what point we consider it under enough load to scale. We'd need to identify what idle usage looks like vs usage under strain so we can set appropriate values.

Use non-standard metrics

With Kubernetes you can use custom metrics to control scaling beyond the default memory and CPU. There are also tools such as Keda that can help with this. For example, rather than look at pod utilisation, we could look at how many messages are waiting to be processed on a service bus subscription and use that to scale the service. If there are only a few messages then just keep replicas low and work through them. If we get a surge of thousands of messages then start rapidly creating more replicas to handle the load.

We do kind of get that anyway as lots of messages tends to result in CPU/memory spikes, but there are ways to make it more predictable.

Comment on lines 12 to 19
const validScheduledSettlements = []
for (const scheduledSettlement of scheduledSettlements) {
  try {
    validScheduledSettlements.push(validateSchedule(scheduledSettlement))
  } catch (err) {
    console.error(err)
  }
}
Member

Could this be broken down into a smaller function, e.g. getValidScheduledSettlements or similar, to simplify this function?

Contributor Author

Yup, on it
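
A rough sketch of that extraction, reusing the loop above and the existing validateSchedule helper (the function and file names are illustrative):

const validateSchedule = require('./validate-schedule')

// filter the fetched schedules down to those that pass validation,
// logging and dropping any that fail
const getValidScheduledSettlements = (scheduledSettlements) => {
  const validScheduledSettlements = []
  for (const scheduledSettlement of scheduledSettlements) {
    try {
      validScheduledSettlements.push(validateSchedule(scheduledSettlement))
    } catch (err) {
      console.error(err)
    }
  }
  return validScheduledSettlements
}

module.exports = getValidScheduledSettlements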

Comment on lines 21 to 29
const updatedScheduledSettlements = []
for (const validScheduledSettlement of validScheduledSettlements) {
  try {
    await updateScheduledSettlementByScheduleId(validScheduledSettlement.scheduleId, started, transaction)
    updatedScheduledSettlements.push(validScheduledSettlement)
  } catch (err) {
    console.error(`Could not update saved for: ${validScheduledSettlement.scheduleId}, removing from schedule`)
  }
}
Member

As above, could this be extracted into its own function?

Contributor Author

Yup, on it
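
Similarly, a sketch of how the second loop might be pulled out (again, the new function name is illustrative):

const updateScheduledSettlementByScheduleId = require('./update-scheduled-settlement-by-schedule-id')

// persist the started timestamp for each valid schedule, dropping any that fail to update
const updateScheduledSettlements = async (validScheduledSettlements, started, transaction) => {
  const updatedScheduledSettlements = []
  for (const validScheduledSettlement of validScheduledSettlements) {
    try {
      await updateScheduledSettlementByScheduleId(validScheduledSettlement.scheduleId, started, transaction)
      updatedScheduledSettlements.push(validScheduledSettlement)
    } catch (err) {
      console.error(`Could not update saved for: ${validScheduledSettlement.scheduleId}, removing from schedule`)
    }
  }
  return updatedScheduledSettlements
}

module.exports = updateScheduledSettlements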

@@ -1,6 +1,6 @@
 {
   "name": "ffc-pay-statement-constructor",
-  "version": "0.7.0",
+  "version": "0.9.0",
Member

This is quite a jump, but I assume there are other in-flight PRs.

  }

  await transaction.commit()
  return updatedScheduledSettlements
Member

Is there value in returning this if the calling function doesn't use it? Or is it a placeholder for the next ticket?

Contributor Author

Yeah, it's for the next part of the process.

sonarqubecloud bot commented Sep 7, 2022

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 0 (rating A)

Coverage: 96.3%
Duplication: 0.0%

@johnwatson484 merged commit 79d41e3 into main Sep 7, 2022
@johnwatson484 deleted the pay-2317-setup-constructor-schedule branch September 7, 2022 12:26