Web Services Connector – Reliable Pagination Pattern for Offset‑Only APIs (No Cursor / No Snapshot)

Hi Community,

I’m looking for guidance / confirmation on a pagination scenario we’ve hit with the IdentityNow Web Services (REST) connector, specifically around offset‑based APIs with hard page limits and no cursor support.

API Characteristics

We’re integrating with an external SaaS API whose /users endpoint behaves as follows:

  • Offset‑based pagination only
    (limit + offset)

  • Hard cap: max 100 records per call (higher limits ignored)

  • No cursor / no links.next

  • Live dataset (no snapshot guarantee, ordering not explicitly stable)

  • Response structure is roughly:

    JSON

    {
      "userVOList": [ ... ],
      "totalResultsCount": 185,
      "limit": 100,
      "offset": 0
    }

The vendor has confirmed this is expected behavior: clients must explicitly call
offset = N * 100 to fetch additional pages.
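In client terms, the vendor’s contract amounts to a fixed-step offset walk. A minimal standalone sketch (plain Java, names are mine, not connector code) of the offsets a client must request for a given `totalResultsCount`:

```java
import java.util.ArrayList;
import java.util.List;

public class OffsetWalk {
    // Mirrors the vendor's hard cap of 100 records per call.
    static final int PAGE_SIZE = 100;

    // Offsets the client must request: 0, 100, 200, ... until the total is covered.
    public static List<Integer> pageOffsets(int totalResultsCount) {
        List<Integer> offsets = new ArrayList<Integer>();
        for (int offset = 0; offset < totalResultsCount; offset += PAGE_SIZE) {
            offsets.add(offset);
        }
        return offsets;
    }

    public static void main(String[] args) {
        // For totalResultsCount = 185 the client needs exactly two calls.
        System.out.println(pageOffsets(185));
    }
}
```

So for the 185-record example above, the client makes calls at offset 0 and offset 100, and the second response is a short page.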


What we tried in IdentityNow

We attempted multiple paging strategies:

  1. Built‑in OFFSET paging

    • Paging Type = OFFSET

    • Page Size = 100

    • Offset Param = offset

    • Limit Param = limit

    • Works intermittently but results can drift due to live ordering.

  2. Custom Paging Steps

    • Variations using:

      • $RECORDS_COUNT$

      • fixed increments (e.g., +100)

      • termination when < pageSize

    • All eventually hit:

      sailpoint.connector.ConnectorException:
      Index X out of bounds for length X
      
      
  3. Cursor / next‑link style paging

    • Not possible — API does not return links.next

    • Attempting $response.links.next$ results in:

      Final URL can not be null
      
      
  4. Rules (Before/After Operation, WebServiceAfterOperationRule)

    • Not invoked during account aggregation

    • No ability to control paging loop or issue additional GET calls

At this point it appears that custom paging logic inside the REST connector is not safe or supported for live offset‑based APIs.

Questions:

  1. Is this behavior (index‑out‑of‑bounds with custom paging steps) a known limitation of the IdentityNow Web Services connector?

  2. Is there any supported pattern inside IdentityNow for:

    • Offset‑only APIs

    • Hard page caps

    • No cursor / no snapshot

  3. Are there any roadmap items or undocumented hooks for safe aggregation‑time paging control, or is external snapshot ingestion the recommended approach in these cases?

Thanks,

Mahesh

Q1:
Yes — this is a known pain point with the Web Services connector. Custom paging steps + live offset-based APIs tend to break once the data shifts mid-aggregation. The connector really assumes stable paging input; when offsets drift, you get internal indexing errors like the one you’re seeing.

Q2:
Not really. IDN can do limit/offset, but only safely if the API has:

  • stable ordering, or

  • a cursor / marker, or

  • some kind of snapshot / delta filter

Without at least one of those, offset paging against live data is inherently unreliable and not something the connector handles well.
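To illustrate why, here is a small standalone simulation (plain Java, not connector code) of offset paging over a dataset that mutates between page requests; a single deletion shifts every later offset and an unrelated record is silently skipped:

```java
import java.util.ArrayList;
import java.util.List;

public class OffsetDriftDemo {
    // Naive offset/limit paging over an in-memory "live" dataset.
    static List<String> page(List<String> data, int offset, int limit) {
        int end = Math.min(offset + limit, data.size());
        if (offset >= end) return new ArrayList<String>();
        return new ArrayList<String>(data.subList(offset, end));
    }

    public static List<String> aggregateWithMidRunDelete() {
        List<String> live = new ArrayList<String>();
        for (int i = 0; i < 6; i++) live.add("user" + i);

        List<String> seen = new ArrayList<String>();
        seen.addAll(page(live, 0, 3));   // page 1 returns user0..user2

        live.remove("user1");            // a deletion happens mid-aggregation

        seen.addAll(page(live, 3, 3));   // page 2 now starts one record later
        return seen;
    }

    public static void main(String[] args) {
        // user3 still exists upstream but was never returned.
        System.out.println(aggregateWithMidRunDelete().contains("user3")); // false
    }
}
```

The mirror-image case (an insert mid-run) produces a duplicate instead of a miss; neither is detectable from the client side without stable ordering or a cursor.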

Q3:
No documented hooks. Connector rules don’t let you control the aggregation paging loop itself. In cases like this, the usual recommendation is:

  • external snapshot / staging service, or

  • push the vendor to add stable sorting, cursor paging, or delta filters.

Bottom line: what you’re seeing is expected behavior given the API limitations, not a misconfiguration on your side.

1 Like

I agree with @Swegmann that the connector is really set up for APIs that have a stable ordering or snapshot of the data when using offset paging.

Does the vendor offer any sort of filtering of the data returned that you could use? For example, could you aggregate based on the first letter/number of the account ID? Assuming your current account total is low (185 in your example), this could let you make several aggregation calls that don’t require paging at all, or only require it once or twice. It isn’t ideal for large datasets, and you would need an HTTP Operation for each character sequence, but it may be a solution. You could also do all of that in an After Operation Rule if you wanted to keep a single HTTP Operation.
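A rough sketch of the partitioning idea, assuming (hypothetically) the API exposed some prefix filter; each partition stays under the 100-record cap, so no call needs paging:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PartitionedFetch {
    // Stand-in for one filtered HTTP call; the prefix filter is hypothetical,
    // the vendor API discussed here does not actually offer one.
    static List<String> fetchByPrefix(List<String> dataset, char prefix) {
        List<String> out = new ArrayList<String>();
        for (String id : dataset) {
            if (Character.toLowerCase(id.charAt(0)) == Character.toLowerCase(prefix)) {
                out.add(id);
            }
        }
        return out;
    }

    // One HTTP operation per character in the partition alphabet.
    public static List<String> aggregateByPartitions(List<String> dataset, String alphabet) {
        List<String> all = new ArrayList<String>();
        for (char c : alphabet.toCharArray()) {
            all.addAll(fetchByPrefix(dataset, c));
        }
        return all;
    }

    public static void main(String[] args) {
        List<String> ds = Arrays.asList("alice", "bob", "ben", "carol");
        System.out.println(aggregateByPartitions(ds, "abc"));
    }
}
```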

1 Like

Thanks for the detailed explanation.

Thanks for the suggestion. We evaluated this approach.

Unfortunately, the RFPIO /users API does not support any server‑side filtering (prefix, startsWith, search, etc.). It only supports limit and offset, so we can’t partition the dataset before pagination.

Also, in IdentityNow, AfterOperation and AfterAggregation rules run after aggregation and can’t issue additional GET calls or control paging, so they can’t be used to orchestrate multiple aggregation calls.

Given an offset‑only, live API with no snapshot or filtering, we’re thinking of an external snapshot ingestion approach to stabilize the data before IdentityNow consumes it.
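A rough sketch of what that snapshot service’s ingest loop might look like (plain Java; `PageSource.fetchPage` is a hypothetical stand-in for the real HTTP call). The idea is to walk all pages in one tight loop to minimize the drift window, then serve IdentityNow from the frozen copy:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class SnapshotIngest {
    // Hypothetical abstraction over the vendor's limit/offset endpoint.
    interface PageSource {
        List<Map<String, Object>> fetchPage(int offset, int limit);
    }

    public static List<Map<String, Object>> takeSnapshot(PageSource api, int limit) {
        List<Map<String, Object>> snapshot = new ArrayList<Map<String, Object>>();
        int offset = 0;
        while (true) {
            List<Map<String, Object>> page = api.fetchPage(offset, limit);
            snapshot.addAll(page);
            if (page.size() < limit) break; // short page => last page
            offset += limit;
        }
        // Frozen for downstream reads: aggregation sees one consistent dataset.
        return Collections.unmodifiableList(snapshot);
    }
}
```

This doesn’t eliminate drift at the source, but it shrinks the exposure to the seconds the ingest loop runs, instead of the length of a full aggregation.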

Appreciate the idea — it would work well if the vendor supported filtering.

-Mahesh

"safe aggregation‑time paging control"

The control has to come from the API endpoint / server side. An API client/consumer isn’t the logical place to implement that control; it’s the beneficiary of it. This is an API endpoint maturity matter, IMO.

1 Like

Is this the API that you are working from?

Are you able to make use of the Last Active To/From Date to reduce the number of users you are getting to only those that are active? I’m unsure whether this would catch users being added/removed, but it might be worth looking into.

Yes, unfortunately that approach didn’t work.

Hi Everyone,

Thanks a lot for your suggestions and guidance on this issue. I was able to resolve the problem by implementing pagination along with an after‑aggregation filtering approach.

Pagination approach

  • Created two account aggregation calls:

    • First call: records 0–100

    • Second call: offset = 100, page size = 100

    • Paging Steps:

      $totalCount$ = $totalCount$ + $RECORDS_COUNT$
      TERMINATE_IF $totalCount$ >= $response.totalResultsCount$
      $endpoint.fullUrl$ = $application.baseUrl$ + $endpoint.relativeUrl$ + "?offset=" + $totalCount$

  • This ensured all records were fetched correctly without missing or duplicating users.

Filtering approach

    • API‑level filtering on user status was not working reliably in this case.

    • To handle this, I used an After Operation Rule to:

      • Filter users based on allowed statuses (e.g., ACTIVE, PENDING_ACTIVATION)

      • Flatten single‑value arrays

      • Enforce required attributes (userName, id)

      • De‑duplicate accounts based on userName
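For reference, the paging-step expressions above can be sanity-checked with a small standalone simulation (plain Java, mirroring the accumulate-then-TERMINATE_IF logic, not connector code):

```java
import java.util.ArrayList;
import java.util.List;

public class PagingStepsSim {
    // Simulates the connector paging steps:
    //   $totalCount$ = $totalCount$ + $RECORDS_COUNT$
    //   TERMINATE_IF $totalCount$ >= $response.totalResultsCount$
    //   next URL offset = $totalCount$
    // Returns the sequence of offsets the connector would request.
    public static List<Integer> offsetsRequested(int totalResultsCount, int pageSize) {
        List<Integer> offsets = new ArrayList<Integer>();
        int totalCount = 0;
        int offset = 0;
        while (true) {
            offsets.add(offset);
            // Records the server would return for this page.
            int recordsCount = Math.min(pageSize, totalResultsCount - totalCount);
            totalCount += recordsCount;                  // $totalCount$ += $RECORDS_COUNT$
            if (totalCount >= totalResultsCount) break;  // TERMINATE_IF condition
            offset = totalCount;                         // "?offset=" + $totalCount$
        }
        return offsets;
    }

    public static void main(String[] args) {
        // 185 users, page size 100 -> requests land at offset 0 and offset 100.
        System.out.println(offsetsRequested(185, 100));
    }
}
```

Note this assumes the dataset stays stable for the duration of the run, which is why the filtering/de-duplication pass below is still needed.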

And the After Aggregation Rule:

import java.util.*;

log.info("Responsive After Aggregation Rule starting");

if (processedResponseObject == null) {
    log.info("Responsive After Aggregation: processedResponseObject is null; returning empty data.");
    Map emptyReturn = new HashMap();
    emptyReturn.put("data", new ArrayList());
    return emptyReturn;
}

// Only accounts in these statuses are kept.
Set allowedStatuses = new HashSet();
allowedStatuses.add("ACTIVE");
allowedStatuses.add("PENDING_ACTIVATION");

List attrsToFlatten = Arrays.asList(new String[]{
    "firstName", "lastName", "phoneNumber", "jobTitle", "timeZone",
    "language", "location", "id", "userName", "userRole", "status"
});

List requiredAttrs = Arrays.asList(new String[]{"userName", "id"});
String dedupeKeyAttr = "userName";

// Unwrap single-element lists, e.g. ["ACTIVE"] -> "ACTIVE".
Object flattenIfSingleArray(Object val) {
    if (val instanceof List) {
        List list = (List) val;
        if (list.size() == 1) { return list.get(0); }
    }
    return val;
}

boolean isBlank(Object v) {
    if (v == null) return true;
    String s = String.valueOf(v).trim();
    return s.length() == 0;
}

Map dedupeMap = new HashMap();
List kept = new ArrayList();
int droppedMissing = 0, droppedStatus = 0, droppedDup = 0;

for (Iterator itRows = ((List) processedResponseObject).iterator(); itRows.hasNext(); ) {
    Object obj = itRows.next();
    if (!(obj instanceof Map)) { droppedMissing++; continue; }

    Map m = (Map) obj;
    Map norm = new HashMap();

    // Normalize the known attributes first.
    for (Iterator itA = attrsToFlatten.iterator(); itA.hasNext(); ) {
        String a = (String) itA.next();
        norm.put(a, flattenIfSingleArray(m.get(a)));
    }

    // Carry over any remaining attributes untouched.
    for (Iterator itK = m.keySet().iterator(); itK.hasNext(); ) {
        String k = String.valueOf(itK.next());
        if (!norm.containsKey(k)) {
            norm.put(k, flattenIfSingleArray(m.get(k)));
        }
    }

    // Enforce required attributes (userName, id).
    boolean requiredOk = true;
    for (Iterator itReq = requiredAttrs.iterator(); itReq.hasNext(); ) {
        String req = (String) itReq.next();
        if (isBlank(norm.get(req))) { requiredOk = false; break; }
    }
    if (!requiredOk) { droppedMissing++; continue; }

    // Filter on allowed statuses.
    Object statusObj = norm.get("status");
    String status = (statusObj == null) ? null : String.valueOf(statusObj).trim();
    if (status == null || !allowedStatuses.contains(status)) { droppedStatus++; continue; }

    // De-duplicate on the case-insensitive userName.
    Object keyObj = norm.get(dedupeKeyAttr);
    String key = ((keyObj == null) ? "" : String.valueOf(keyObj)).trim().toLowerCase();
    if (key.length() == 0) { droppedMissing++; continue; }
    if (dedupeMap.containsKey(key)) { droppedDup++; continue; }

    dedupeMap.put(key, new Integer(kept.size()));
    kept.add(norm);
}

log.info("Responsive After Aggregation: input=" + ((List) processedResponseObject).size()
        + ", kept=" + kept.size() + ", droppedMissing=" + droppedMissing
        + ", droppedStatus=" + droppedStatus + ", droppedDup=" + droppedDup);

Map returnMap = new HashMap();
returnMap.put("data", kept);
log.info("Responsive After Aggregation Rule exiting");
return returnMap;

Thanks,

Mahesh

2 Likes

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.