Conversation
const value = result.value

// Add some helper props
Haha good call, I believe some of the first ffc repos copied their config from some of the flood services which had these comments in. I think we've been copying and pasting this file continuously for three years 😆
app/processing/index.js
const start = async () => {
  try {
    await batchSchedule()
I think this name should be more descriptive than batchSchedule. I had to read through the function to work out what it was going to do. What about something like schedulePendingSettlements or something?
Agreed, I only did it to match the process map model -- will update and update the model diagram too
const validateSchedule = require('./validate-schedule')
const updateScheduledSettlementByScheduleId = require('./update-scheduled-settlement-by-schedule-id')

const batchSchedule = async (started = new Date()) => {
started could just be declared as a const in the function, as no argument is passed in. I think other repos have something like this to help make testing easier, but I think using Jest to mock the system time will allow us to keep the code simpler.
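For reference, a minimal sketch of that approach, assuming Jest's modern fake timers (Jest 27+); the date and test name are arbitrary:

beforeEach(() => {
  // freeze the clock so new Date() inside batchSchedule is deterministic
  jest.useFakeTimers()
  jest.setSystemTime(new Date(2022, 7, 5, 12, 0, 0))
})

afterEach(() => {
  jest.useRealTimers()
})

test('uses the mocked system time as started', async () => {
  await batchSchedule()
  // any new Date() created inside batchSchedule resolves to the mocked time,
  // so assertions can compare against new Date(2022, 7, 5, 12, 0, 0)
})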
THANK YOU! I'd added a lot of extra test cases for little to no benefit
const config = require('../../config').processingConfig

const getScheduledSettlements = async (started, transaction) => {
  return db.schedule.findAll({
Without lock and skipLocked on these rows, this does mean multiple replicas of this service will pick up the same settlements. Appreciate that we agreed to only have one replica for the first release, but I think it would still be wise to design for concurrency where possible.
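For illustration, a sketch of what that could look like with Sequelize's row-locking options (only the locking options are shown; the rest of the query is as per the original):

const getScheduledSettlements = async (started, transaction) => {
  return db.schedule.findAll({
    // ...existing query options
    lock: transaction.LOCK.UPDATE, // SELECT ... FOR UPDATE on matched rows
    skipLocked: true, // other replicas skip rows already locked
    transaction
  })
}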
Aaahh gotcha, gotcha. Does this need some sort of 'limit to the first x rows' within the findAll, if such a param exists?
Just some thoughts on the replica sitch: does replica A pick up all schedules (say 2k+), which causes its memory/CPU to spike, so K8s brings up another replica, and replica B then looks to pick up everything in that table? At that point (just given the domain of this), is there likely to be any/many new records that won't have been picked up by replica A? I guess this is very sensitive to our configured Helm values.
Yes, good point. You can set that with limit: 100 for example; then it will only lock 100 rows, and the next replica will skip those locked rows and pick up the next 100.
You're correct that the behaviour depends on what we set up in Helm, so I'll describe a few of those options.
The actual value needs some thought (and testing) as we want to maximise throughput, and the answer is subjective to each service. For example, is it more performant to keep WIP low for each replica but have a greater number of replicas? Or is it better to have fewer replicas with greater WIP? Greater WIP may mean those pods need more CPU/memory allocated to them.
Set number of replicas > 1
In the Helm chart you can define how many instances of the service should be running, independent of autoscaling. So if we set that value to 4, there will always be four pods running ready to process.
Use horizontal pod autoscaling
We declare a usage threshold for CPU and/or memory of a pod, and if that threshold is exceeded then another replica is created, up to the maximum number of replicas we permit in the Helm chart.
That threshold can either be a fixed CPU/memory usage or a percentage of usage against the limit. The latter is what we tend to do.
It's a bit of an art to get the balance right of how much memory and CPU we allocate to a pod, and at what point we consider it under high enough load to scale. We'd need to identify what idle usage is vs what under-strain usage is so we can set the appropriate values.
Use non-standard metrics
With Kubernetes you can use custom metrics to control scaling beyond the default memory and CPU. There are also tools such as Keda that can help with this. For example, rather than look at pod utilisation, we could look at how many messages are waiting to be processed on a service bus subscription and use that to scale the service. If there are only a few messages then just keep replicas low and work through them. If we get a surge of thousands of messages then start rapidly creating more replicas to handle the load.
We do kind of get that anyway, as lots of messages tend to result in CPU/memory spikes, but there are ways to make it more predictable.
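Putting the limit suggestion together with the locking options above, the claim query could end up as something like this (a sketch; the value 100 is illustrative and would presumably come from config):

return db.schedule.findAll({
  limit: 100, // each replica claims at most 100 rows per run
  lock: transaction.LOCK.UPDATE,
  skipLocked: true, // the next replica skips these and takes the following 100
  transaction
})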
const validScheduledSettlements = []
for (const scheduledSettlement of scheduledSettlements) {
  try {
    validScheduledSettlements.push(validateSchedule(scheduledSettlement))
  } catch (err) {
    console.error(err)
  }
}
Could this be broken down into a smaller function, i.e. getValidScheduledSettlements or similar, to simplify this function?
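For reference, the extracted helper could look something like this (a sketch using the existing loop body; the name follows the suggestion above):

const getValidScheduledSettlements = (scheduledSettlements) => {
  const validScheduledSettlements = []
  for (const scheduledSettlement of scheduledSettlements) {
    try {
      // collect only the schedules that pass validation
      validScheduledSettlements.push(validateSchedule(scheduledSettlement))
    } catch (err) {
      console.error(err)
    }
  }
  return validScheduledSettlements
}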
Yup, on it
const updatedScheduledSettlements = []
for (const validScheduledSettlement of validScheduledSettlements) {
  try {
    await updateScheduledSettlementByScheduleId(validScheduledSettlement.scheduleId, started, transaction)
    updatedScheduledSettlements.push(validScheduledSettlement)
  } catch (err) {
    console.error(`Could not update saved for: ${validScheduledSettlement.scheduleId}, removing from schedule`)
  }
}
As above, could this be extracted into its own function?
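For example (a sketch of the existing loop as a helper; the name updateScheduledSettlements is hypothetical):

const updateScheduledSettlements = async (validScheduledSettlements, started, transaction) => {
  const updatedScheduledSettlements = []
  for (const validScheduledSettlement of validScheduledSettlements) {
    try {
      // mark the schedule as picked up, keeping only those that persist successfully
      await updateScheduledSettlementByScheduleId(validScheduledSettlement.scheduleId, started, transaction)
      updatedScheduledSettlements.push(validScheduledSettlement)
    } catch (err) {
      console.error(`Could not update saved for: ${validScheduledSettlement.scheduleId}, removing from schedule`)
    }
  }
  return updatedScheduledSettlements
}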
Yup, on it
@@ -1,6 +1,6 @@
 {
   "name": "ffc-pay-statement-constructor",
-  "version": "0.7.0",
+  "version": "0.9.0",
This is quite a jump, but I assume there are other in-flight PRs.
}

await transaction.commit()
return updatedScheduledSettlements
Is there value in returning this if the calling function doesn't use it? Or is it a placeholder for the next ticket?
Yeah, it's for the next part of the process
Kudos, SonarCloud Quality Gate passed!
https://eaflood.atlassian.net/browse/SFI-2317