Is there any way to limit the number of threads that run to perform provisioning operations on a Web Services connector?
We need to send add/remove entitlements operations for multiple accounts, but each account update must be executed sequentially.
This is necessary because the database service we are connecting to deadlocks when multiple stored-procedure calls execute concurrently.
Maybe open a support ticket with SailPoint? To my knowledge, it's not something that can be tweaked or configured on the customer side.
Failing that, look at any load balancer you may have in the environment and place it in front of the DB. Set concurrent connections to 1 and allow request queuing. Nginx has something like this for the TCP stream: `max_conns`, plus `backlog` for the pending-connection queue. Ultimately, you ought to protect the data access layer and prevent it from deadlocking, regardless of whether the traffic is coming from ISC or not. Or implement the stored procedure in a more robust fashion; a DB shouldn't trust that all clients will behave in an orderly way.
In other words, what you described is a reliability/availability weakness in the data service itself.
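For the Nginx route, a minimal sketch of the TCP-stream idea might look like the config below. The hostname and port are placeholders for your environment. One caveat: in open-source Nginx, stream connections beyond `max_conns` are rejected rather than queued (the `queue` directive is an NGINX Plus, HTTP-only feature), and `backlog` only sizes the kernel's queue of not-yet-accepted TCP connections on the listener, so test whether this gives you the queuing behavior you actually need:

```nginx
stream {
    upstream db_upstream {
        # hypothetical backend; point this at your database host/port
        server db.internal:1433 max_conns=1;   # at most one active proxied connection
    }

    server {
        # backlog sizes the kernel queue of pending (not yet accepted) TCP connections
        listen 1433 backlog=64;
        proxy_pass db_upstream;
    }
}
```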
There is no option to configure whether provisioning operations are carried out sequentially or in parallel. Your best bet is probably to handle the error in a BeforeOperation rule, i.e. if there is a way to manage the load on the DB server from there.
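For what it's worth, the pattern the question is really asking for, forcing otherwise-parallel account updates through a single-file gate, can be sketched in plain Java. To be clear, this is not SailPoint connector API; the class and method names (`SequentialCalls`, `callStoredProcedure`) are made up, and it only illustrates client-side serialization when the backend cannot tolerate concurrent stored-procedure calls:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class SequentialCalls {
    // single permit: only one "stored procedure call" may run at a time
    private static final Semaphore gate = new Semaphore(1, true);
    private static final List<String> log = new CopyOnWriteArrayList<>();

    // stand-in for the deadlock-prone stored-procedure call
    static void callStoredProcedure(int account) throws InterruptedException {
        gate.acquire();
        try {
            log.add("start-" + account);
            Thread.sleep(10);               // simulated work inside the DB
            log.add("end-" + account);
        } finally {
            gate.release();
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            final int acct = i;
            pool.submit(() -> {
                try {
                    callStoredProcedure(acct);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);

        // each "start" entry must be immediately followed by its matching "end",
        // proving no two calls overlapped despite the 4-thread pool
        for (int i = 0; i < log.size(); i += 2) {
            if (!log.get(i).replace("start", "end").equals(log.get(i + 1))) {
                throw new AssertionError("overlapping calls: " + log);
            }
        }
        System.out.println("all calls ran sequentially: " + (log.size() / 2) + " calls");
    }
}
```

The same effect could come from a single-threaded executor; the semaphore version is shown because it drops into existing multi-threaded code without restructuring it.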