NSP Target Registry chained server #258
base: master
Conversation
* New target registry server pattern allowing future features
* Current chain: watch handler -> store (sqlite + keepalive)
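For context, a minimal sketch of the chaining idea described above. All types here are simplified stand-ins (`Target`, `WatchServer`, `nextTargetRegistryServer` are illustrative names, not the PR's actual nspAPI types); the real composition appears in the diff hunk below.

```go
package chain

// Target and WatchServer are simplified stand-ins for nspAPI.Target
// and nspAPI.TargetRegistry_WatchServer.
type Target struct {
	Ips []string
}

type WatchServer interface {
	Send(targets []*Target) error
}

// TargetRegistryServer is the interface every chain element implements,
// so elements (watch handler, sqlite store, keepalive...) can be composed.
type TargetRegistryServer interface {
	Register(target *Target) error
	Unregister(target *Target) error
	Watch(target *Target, watcher WatchServer) error
}

// nextTargetRegistryServer wraps the next element of the chain and
// terminates the call when there is no next element.
type nextTargetRegistryServer struct {
	next TargetRegistryServer
}

// Watch calls the Watch function of the next chain element.
// If the next element is nil, nil is returned.
func (n *nextTargetRegistryServer) Watch(target *Target, watcher WatchServer) error {
	if n.next == nil {
		return nil
	}
	return n.next.Watch(target, watcher)
}
```

With this shape, each element only needs to know the element after it, which is what lets the server be assembled as nested constructor calls.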
	return nil, errors.Wrap(err, "Unable to create keepalive registry")
}
return nsp.NewServer(
	nspRegistry.NewServer(keepAliveRegistry),
Separation of nspRegistry and watchhandler seems forced to me. They both rely on the same keepAliveRegistry.
What does "NSP multi-replicas support" stand for?
// Watch will call the Watch function of the next chain element.
// If the next element is nil, then nil will be returned.
func (ntrsi *NextTargetRegistryServerImpl) Watch(t *nspAPI.Target, watcher nspAPI.TargetRegistry_WatchServer) error {
Probably this is similar to NSM's Monitor thingy, but it seems only one chain entry is allowed to implement a "real" Watch function, as it needs to block forever. This is probably not an issue, but it is easy to make mistakes if the user is not aware :)
(Also, I read somewhere that gRPC streams are not thread-safe. So, for example, say the backend, i.e. the DB, was partitioned. Then letting loose multiple goroutines wouldn't work properly without synchronization, and it would probably require a special tail chain entry that waits for the watch routines in order to keep the stream open.)
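A hedged sketch of what such synchronization could look like, reusing the stand-in types from the sketch above (`safeWatcher` and `watchPartitions` are invented names, not part of the PR; gRPC-Go does document that concurrent sends on one stream from multiple goroutines are unsafe):

```go
import "sync"

// safeWatcher serializes Send calls so that multiple watch goroutines
// (e.g. one per DB partition) can safely share one gRPC stream.
type safeWatcher struct {
	mu      sync.Mutex
	watcher WatchServer
}

func (s *safeWatcher) Send(targets []*Target) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.watcher.Send(targets)
}

// watchPartitions sketches the "special tail chain entry": it fans out
// one watch goroutine per partition and returns (closing the stream)
// only once every goroutine has exited.
func watchPartitions(partitions []TargetRegistryServer, t *Target, w WatchServer) error {
	shared := &safeWatcher{watcher: w}
	var wg sync.WaitGroup
	for _, p := range partitions {
		wg.Add(1)
		go func(p TargetRegistryServer) {
			defer wg.Done()
			_ = p.Watch(t, shared) // error handling omitted for brevity
		}(p)
	}
	wg.Wait() // keep the Watch RPC (and the stream) open until all are done
	return nil
}
```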
Right, only one chain element would be allowed to implement the real Watch function.
For the second part, do you mean we should have one goroutine to handle all watchers?
I just meant that special care should be taken in that case, and the default chaining couldn't be used as is.
It means we could run multiple NSP instances at the same time. For instance, we could have a leader election system, where the non-leader instances redirect their calls to the leader. I think separating the server into chain elements will simplify the development of this feature.
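To make the idea concrete, one possible shape of such a chain element, again using the stand-in types above (entirely hypothetical: `leaderProxy`, `isLeader`, and the leader client stub are invented for illustration and do not appear in the PR):

```go
// leaderProxy is a hypothetical chain element for multi-replica NSP:
// a non-leader forwards calls to the leader instead of handling them.
type leaderProxy struct {
	isLeader func() bool          // e.g. backed by a Kubernetes lease election
	leader   TargetRegistryServer // gRPC client stub pointing at the leader
	next     TargetRegistryServer // rest of the local chain
}

func (lp *leaderProxy) Register(target *Target) error {
	if !lp.isLeader() {
		return lp.leader.Register(target) // proxy to the leader replica
	}
	return lp.next.Register(target) // leader: continue down the local chain
}
```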
I see. Not sure about the benefit of something like that if non-leaders would merely act as proxies and we still had a persistent storage back-end. Anyway, the chaining approach could be added irrespective of any future modifications.
Description
Note: if this PR gets accepted, I will implement the same pattern for the other APIs (TAPA, IPAM, NSP Configuration manager).
Issue link
/
Checklist