Web-Service SaaS Connector and API Limit (Dayforce)

So I’ve been struggling with this for a while. The issue does not happen if the connector is configured to go through the VA (Web-Service VA). However, recreating the same connector, with the same HTTP operations, on Web-Service SaaS hits this issue.

My aggregation constantly comes back with error 429, even though the documentation suggests this should already be handled: https://developer.sailpoint.com/docs/connectivity/saas-connectivity/in-depth/handling-rate-limits

The API calls go to Dayforce, which enforces a sliding-window limit of 100 API calls per minute.

My HTTP operations are:

1. Aggregating employee IDs. This returns the IDs in Data[*].
2. Iterating through each employee ID from step 1 and pulling down the full details for each.

On a good day, this can be completed within 6 minutes. On a bad day, it can take 20 minutes.

Things I’ve done to increase the success rate include:

- aggregationRetryErrors set to (429, Throttling)

- maxRetryCount 30

- cloudRetryInterval 70

- retryWaitTime 1300000

- apitimeout 7200

- retryableErrors (429, Throttling, Quota, 100)

- A BeforeOperationRule in the 2nd step (employee detail aggregation) that pauses for 1.5 seconds before each item. Judging by how 1000 objects can be pulled in 6 minutes, this pause was probably not being enforced.

- retryWaitTime within the employee detail operation set to 300000
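For reference, this is roughly how I have those retry attributes laid out in the connector's application JSON. The attribute names are copied from my configuration as-is; I can't confirm that the SaaS connector honors all of them, and the exact placement under connectorAttributes is my setup rather than anything documented:

```json
{
  "connectorAttributes": {
    "aggregationRetryErrors": ["429", "Throttling"],
    "maxRetryCount": 30,
    "cloudRetryInterval": 70,
    "retryWaitTime": 1300000,
    "apitimeout": 7200,
    "retryableErrors": ["429", "Throttling", "Quota", "100"]
  }
}
```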

Are there any other items I should try? I was considering setting the schedule (cron) to start in the middle of the hour, since I assume many other SailPoint agents are making calls to Dayforce, so it might be overloaded at the top of the hour.

Any suggestions are appreciated!

You’ve already done a lot of the right things. The 429s you’re seeing come down to a mismatch between how ISC SaaS connectors handle retries and how Dayforce enforces its sliding-window quota. The VA connector masks this better because it serializes calls locally, while SaaS connectors push requests through the cloud retry logic, which isn’t always aligned with Dayforce’s throttling rules.

Why this happens

  • Dayforce quota: 100 requests per minute, sliding window.
  • Your aggregation pattern: One call to list employee IDs, then N calls per employee for details. This can easily exceed 100/minute.
  • SaaS connector retry logic: It retries on 429, but if the retry interval doesn’t match Dayforce’s sliding window, you still hit the limit.
  • VA connector: Runs locally, so pauses and throttling are more predictable.
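The arithmetic behind the N+1 point above is worth making explicit. A rough sketch (the function names and the 1000-employee figure are illustrative, taken from the numbers mentioned in this thread):

```typescript
// Back-of-the-envelope check of an N+1 aggregation (1 list call + N detail
// calls) against a 100-calls-per-minute sliding window.

// Minimum spacing between calls needed to stay under the quota.
function minDelayMs(callsPerMinute: number): number {
  return Math.ceil(60_000 / callsPerMinute);
}

// Best-case duration of a full aggregation if calls are perfectly paced.
function minAggregationMinutes(employees: number, callsPerMinute: number): number {
  const totalCalls = employees + 1; // 1 list call + N detail calls
  return Math.ceil(totalCalls / callsPerMinute);
}

console.log(minDelayMs(100));                  // 600 ms between calls
console.log(minAggregationMinutes(1000, 100)); // 11 minutes minimum
```

Note that the original post mentions roughly 1000 objects pulled in 6 minutes on a good day, i.e. about 167 calls per minute, which is already well over a 100/minute quota and would explain intermittent 429s even with retries configured.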

Things to try

  1. Batch employee detail calls
  • Instead of one call per employee ID, see if Dayforce supports bulk detail retrieval (e.g., expand=Contacts,EmploymentStatuses for multiple IDs).
  • This reduces request volume dramatically.
  2. Tune the retry strategy
  • autoRetryErrors should include 429.
  • Set cloudRetryInterval closer to Dayforce’s window (e.g., 60–65 seconds).
  • Keep maxRetryCount reasonable (10–15) to avoid long stalls.
  3. Throttle at the connector level
  • Use a BeforeOperationRule to enforce a fixed delay between calls (e.g., 600–700 ms).
  • This ensures you never exceed ~100 calls/minute.
  • Your 1.5 s pause may not have been applied consistently; test with logging to confirm.
  4. Schedule aggregation off-peak
  • Your idea of starting mid-hour is valid. If multiple tenants hit Dayforce at the top of the hour, you’ll collide with their shared quota.
  5. Pagination strategy
  • If the employee list is large, paginate with limit and offset instead of pulling all IDs at once.
  • This spreads requests across multiple minutes.
  6. Fall back to the VA for heavy loads
  • If Dayforce’s API doesn’t support bulk detail calls, the VA may remain the more stable option for large aggregations.
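The pacing logic that item 3 approximates with a fixed delay is really a sliding-window limiter. A minimal sketch of that logic follows; this is not connector code (the SaaS connector does not expose a per-call hook like this), just an illustration of the math a client-side throttle has to satisfy. The class name and injectable clock are my own constructs for testability:

```typescript
// Minimal sliding-window limiter: before each call, check how long to wait so
// that no more than `limit` calls fall inside the trailing window. The clock
// is injectable so the logic can be exercised without real sleeps.
class SlidingWindowLimiter {
  private timestamps: number[] = [];

  constructor(
    private limit: number,
    private windowMs: number,
    private now: () => number = Date.now,
  ) {}

  // How long the caller should wait before issuing the next request (0 = go).
  delayForNextCall(): number {
    const t = this.now();
    // Drop timestamps that have fallen out of the trailing window.
    this.timestamps = this.timestamps.filter((ts) => t - ts < this.windowMs);
    if (this.timestamps.length < this.limit) return 0;
    // Otherwise wait until the oldest in-window call expires.
    return this.timestamps[0] + this.windowMs - t;
  }

  recordCall(): void {
    this.timestamps.push(this.now());
  }
}
```

With a 100-call / 60-second window, firing 100 calls in the first second forces a wait of roughly 59 seconds before call 101, which is exactly the burst-then-stall pattern an N+1 aggregation produces without pacing.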

Hello Jeffrey,

I’m not an expert on Web Services SaaS, but I went through your post & spent some time digging through the docs & a couple of related threads. Sharing what I found in case it helps narrow this down.

First thing… the handling-rate-limits doc you referenced looks like it’s for custom SaaS connectors built using the Connector SDK, not the out-of-the-box Web Services SaaS connector. That doc is more about how you implement retry logic when you build your own connector with @sailpoint/connector-sdk. For the OOTB Web Services SaaS connector, that level of control doesn’t seem to be exposed.

From the Web Services connector docs, retry behavior for 429 is mainly driven by the Retry-After response header. It mentions:

  • Retry timing is based on Retry-After

  • Accepted range is 1–180 seconds (anything higher gets clamped to 180)

  • If the header isn’t present, it retries after 1 second

  • There’s also a limit on how many times it retries after a 429, so it won’t keep retrying indefinitely
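Putting the documented rules above into code form, the delay calculation would look roughly like this. This is my sketch of the described behavior, not actual connector source, and I'm assuming values below 1 second are clamped up the same way values above 180 are clamped down:

```typescript
// Sketch of the documented 429 retry-delay rules for the OOTB Web Services
// connector (illustrative, not connector source):
//   - delay comes from the Retry-After response header, in seconds
//   - accepted range is 1-180 s; anything higher is clamped to 180
//   - if the header is missing or unparsable, retry after 1 second
function retryDelaySeconds(retryAfterHeader: string | undefined): number {
  const parsed = Number(retryAfterHeader);
  if (retryAfterHeader === undefined || !Number.isFinite(parsed)) {
    return 1; // documented fallback when the header is absent
  }
  return Math.min(Math.max(parsed, 1), 180); // clamp into 1-180 s
}
```

The practical consequence: if Dayforce returns a 429 without a usable Retry-After header, the connector retries after only 1 second, which is well inside a 60-second sliding window, so the retry itself burns more quota and fails again.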

References:

So one thing I would definitely check is what Dayforce is actually returning on a 429, especially whether Retry-After is present & what value it has. If that header isn’t coming back properly, the connector might just retry too early and hit the limit again.

On the config side, I am not sure all those attributes are actually doing anything in this flow. I came across a recent thread with the same HttpClientWrapper.ts rate limit message where retryWaitTime didn’t seem to be picked up during aggregation. I also couldn’t find cloudRetryInterval or apitimeout in the Web Services SaaS docs, so they might just be ignored here (could be wrong if there’s a doc reference for those).

On the VA vs SaaS part, from what I understand, VA-based Web Services lets you hook into execution closer to each call using Beanshell BeforeOperation / AfterOperation rules. In Web Services SaaS, you only get Connector Customizers with lifecycle hooks like beforeStdAccountList and afterStdAccountRead. Those wrap the aggregation/account operations, not individual child API calls inside the iterate-employee step. So a per-call delay like your 1.5s pause may not be applied the same way. Also, Thread.sleep() isn’t supported in ISC rules anyway (reference).

One more thing I’m thinking … if the flow is doing 1 call to list employees and then making separate calls per employee for details, that N+1 pattern alone can hit Dayforce’s 100-call sliding window pretty quickly regardless of retry config. If that’s the case, reducing the number of calls (bulk/expand/delta if available) might have more impact than tweaking retries.

Curious to see what ends up working on your side.

Appreciate both of your efforts and digging into this.

So I did most of the items on Shantha’s list. It increased the success rate from 10% to 50%.

It was Harish’s Retry-After logic that was finally accepted by the Web-Service connector, specifically the OOTB Web Service connector. The success rate is now up to 97% based on the last 48 hours of aggregations.

I thought the connector-sdk was how they built their OOTB Web Service connector, so I was hoping those configurations would work. But I think Harish’s point is that it’s an in-house Web Service, that level of control is not exposed, and it’s not based on the connector-sdk.

I’ll slowly remove all the other retry timers taken from the connector-sdk docs and see if it fails more. If the success rate stays at 95%+, then it was the missing Retry-After logic all along.

Good news: it’s now working flawlessly. Thank you both!