Continuous HTTP PATCH retries causing 429 rate limit on Web Services app after certification revocation (IdentityIQ 8.4)

Issue:
After a certification campaign revoked entitlements on a Web Services application, IdentityIQ began sending repeated HTTP PATCH calls to the configured endpoints. The retries continue across different web service groups, causing HTTP 429 (rate limit) responses and blocking aggregation. Attempts to stop the retries (disabling request processors, enabling maintenance mode, restarting task servers, cleaning up Request objects) have not worked.

Current setup:

  • Request Definition uses sailpoint.request.IntegrationRequestExecutor with retryMax=20.

  • Application-level provisioning retries are set to 1 (but this appears to be ignored during certification-driven provisioning).

  • A logger in the after-operation rule shows continuous PATCH payloads.

Questions:

  • Which process or task triggers these retries in certification-driven provisioning?

  • How can we immediately stop the retry loop?

  • What’s the best way to prevent this in future (e.g., retry/backoff settings, workflow changes)?

The two SailPoint Community threads below describe a similar issue:

https://community.sailpoint.com/t5/IdentityIQ-Forum/Provisioning-Retry-Behavior/m-p/206529

https://community.sailpoint.com/t5/IdentityIQ-Forum/Retry-doesn-t-appear-to-be-using-application-co…

Any guidance or best practices would be appreciated.

@vinnysail

What triggers the retries during certification-driven provisioning?

In IdentityIQ, retries in provisioning happen due to how the IntegrationRequestExecutor handles failures:

  • The IntegrationRequestExecutor (used in your request definition) has retryMax and retryInterval parameters.

    • In your setup: retryMax=20 → it will try up to 20 times before giving up.

    • This is independent of application-level retry settings.

  • Certification-driven provisioning (like revoking entitlements) creates request objects for each entitlement.

    • If the provisioning operation fails (e.g., HTTP 429), IIQ marks the request as in progress and schedules a retry according to the IntegrationRequestExecutor's retryMax/retryInterval settings.

  • Retries continue even if application-level retries are set to 1, because IntegrationRequestExecutor has its own retry logic.

Key point: The retry loop is being triggered by the request executor in the context of the certification workflow, not by the standard provisioning engine task.
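You can confirm this from the IIQ console (via the source command) or a one-off BeanShell rule by listing the pending Request objects tied to the integration request definition. A minimal sketch, assuming the out-of-the-box definition name "Integration" and the standard context (SailPointContext) variable available in rules; verify the definition name and field names in the debug object browser:

    import java.util.List;
    import sailpoint.object.Filter;
    import sailpoint.object.QueryOptions;
    import sailpoint.object.Request;

    // Assumption: the retrying requests belong to the "Integration" request
    // definition; substitute whatever your certification provisioning uses.
    QueryOptions qo = new QueryOptions();
    qo.addFilter(Filter.eq("definition.name", "Integration"));
    qo.addFilter(Filter.isnull("completed"));   // never completed = still pending/retrying

    List requests = context.getObjects(Request.class, qo);
    for (Object o : requests) {
        Request r = (Request) o;
        System.out.println(r.getId() + "  launched=" + r.getLaunched()
                + "  nextLaunch=" + r.getNextLaunch());
    }

If this returns a steady stream of incomplete requests against your Web Services application, the executor (not the provisioning engine task) is your retry source.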

How to immediately stop the retry loop?

Since usual methods (disabling request processors, maintenance mode, restarting servers, cleaning request objects) didn’t work, you need to target the retry executor directly:

Immediate actions:

  1. Cancel all pending integration requests. First identify them; the query below is the illustrative pseudo-SQL from the original approach (in the actual IIQ schema, Request objects live in the spt_request table, so adapt the table and column names):

     SELECT * FROM RequestObject
     WHERE status = 'IN_PROGRESS'
     AND application = 'YourWebServiceApp';

     Then mark them as FAILED manually. This stops the executor from retrying them. A BeanShell sketch that does the same cleanup through the API appears after this list.

  2. Temporarily disable the Request Definition:

    • Set enabled=false on the request definition used by certification provisioning (the one configured with sailpoint.request.IntegrationRequestExecutor).

    • This prevents new requests from being retried.

  3. Throttle the HTTP requests at the endpoint:

    • If you can’t stop the executor fast enough, configure a temporary rate limit or maintenance mode on the web service itself to absorb the repeated PATCH calls.

  4. Check the after-operation rule:

    • You mentioned your logger shows continuous PATCH payloads.

    • If the after-operation rule is itself triggering provisioning in a loop, temporarily disable it.

Summary: Manually mark stuck requests as FAILED, disable the executor temporarily, and pause the app endpoint if needed.
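If the UI and Request-object cleanup keep failing, a BeanShell sketch along these lines (run from the IIQ console, or as a one-off rule) removes the stuck Request objects through the API; sailpoint.api.Terminator also cleans up references to the deleted objects. As above, the definition name "Integration" is an assumption:

    import java.util.List;
    import sailpoint.api.Terminator;
    import sailpoint.object.Filter;
    import sailpoint.object.QueryOptions;
    import sailpoint.object.Request;

    QueryOptions qo = new QueryOptions();
    qo.addFilter(Filter.eq("definition.name", "Integration"));  // assumed name
    qo.addFilter(Filter.isnull("completed"));

    List requests = context.getObjects(Request.class, qo);
    Terminator terminator = new Terminator(context);
    for (Object o : requests) {
        Request r = (Request) o;
        System.out.println("Deleting stuck request " + r.getId());
        terminator.deleteObject(r);  // once the Request is gone, the executor has nothing to retry
    }
    context.commitTransaction();

Deleting the Request through Terminator (rather than flipping its status directly in the database) avoids leaving orphaned references, and the same query with an extra created-date filter can be reused for the periodic cleanup discussed under prevention below.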


How to prevent this in future?

Best Practices:

  1. Tweak retry/backoff settings in IntegrationRequestExecutor:

    • Use retryMax + retryInterval + backoff to prevent rapid-fire retries.

    • Example (property names as written in the original request definition; verify the exact attribute names against your own RequestDefinition XML):

      <property name="retryMax" value="3"/>
      <property name="retryInterval" value="300"/> <!-- seconds -->
      <property name="backoff" value="2"/>         <!-- exponential backoff multiplier -->
This spaces retries out instead of firing them back-to-back, which avoids overloading your Web Services endpoint; the sketch below shows the delay schedule these settings produce.
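As a plain-Java illustration (no IIQ classes), assuming the executor computes each delay as retryInterval × backoff^attempt:

    // Standalone sketch of exponential backoff arithmetic; illustrative only.
    public class BackoffDemo {
        public static void main(String[] args) {
            int retryMax = 3;
            long retryIntervalSec = 300;  // base interval, as in the example above
            double backoff = 2.0;         // exponential multiplier

            for (int attempt = 0; attempt < retryMax; attempt++) {
                long delaySec = (long) (retryIntervalSec * Math.pow(backoff, attempt));
                System.out.printf("retry %d fires after %d seconds%n", attempt + 1, delaySec);
            }
            // Prints 300, 600, 1200 — three well-spaced attempts instead of
            // twenty near-immediate ones with retryMax=20.
        }
    }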

  2. Use “soft-fail” handling for HTTP 429 in rules or workflows:

    • Implement logic in the after-operation rule or in workflow rules to detect HTTP 429 and pause or reschedule provisioning instead of retrying immediately (see the Retry-After sketch after this list).

  3. Separate certification from high-impact provisioning:

    • For sensitive endpoints, configure separate request definitions with limited retries or manual approval.

    • This avoids certification triggering automated retries against critical APIs.

  4. Enable auditing/logging for request retries:

    • Track which requests are being retried and why (status codes, error messages).

    • This helps identify issues before they become loops.

  5. Clean up stuck requests periodically:

    • Some teams run a scheduled task that fails “stuck” integration requests older than X hours; the Terminator sketch earlier in this answer can be adapted for this by adding a Filter.lt("created", cutoff) condition.
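For the “soft-fail” idea in item 2, the core pattern is: on a 429, honor the server's Retry-After header (or a sane default) instead of retrying immediately. A minimal standalone sketch using java.net.http; the URL and payload are placeholders, and in IIQ the equivalent check would live in your after-operation rule or workflow rather than in a raw HTTP client:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class PatchWithRetryAfter {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // Placeholder endpoint and body; substitute your web service's PATCH target.
            HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/api/users/123"))
                    .header("Content-Type", "application/json")
                    .method("PATCH", HttpRequest.BodyPublishers.ofString("{\"entitlement\":null}"))
                    .build();

            for (int attempt = 1; attempt <= 3; attempt++) {
                HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
                if (response.statusCode() != 429) {
                    System.out.println("Done with status " + response.statusCode());
                    return;
                }
                // Honor the server's Retry-After header instead of retrying at once.
                // Note: Retry-After may also be an HTTP-date; handled here only as
                // seconds for brevity, with a 60s default if the header is missing.
                long waitSec = response.headers().firstValue("Retry-After")
                        .map(Long::parseLong).orElse(60L);
                System.out.println("429 received, backing off " + waitSec + "s");
                Thread.sleep(waitSec * 1000);
            }
            System.out.println("Still rate-limited after 3 attempts; giving up (soft-fail).");
        }
    }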

The key takeaway: IntegrationRequestExecutor retries are separate from application provisioning retries, and must be managed through its own configuration or by canceling in-progress requests.