Enhancements: Updates to API Paging Limitations

to @M_rtenH
Hi Martin,

We have a very similar difficulty on our current IdentityNow project with custom reporting. I have been trying to use some fairly complex PowerShell for it, but I am not very satisfied with the results yet.
It would be a real help if you could kindly share your Python script with me too.

Thanks in advance.

Hi @kenilelk1
For PowerShell, have you looked at GitHub - yannick-beot-sp/powershell_module_identitynow: SailPoint IdentityNow PowerShell Module or GitHub - darrenjrobinson/powershell_module_identitynow: SailPoint IdentityNow PowerShell Module
I have implemented pagination logic in powershell_module_identitynow/Get-IdentityNowPaginatedCollection.ps1 at master · yannick-beot-sp/powershell_module_identitynow · GitHub
and use it in powershell_module_identitynow/Get-IdentityNowAccessProfile.ps1 at master · yannick-beot-sp/powershell_module_identitynow · GitHub for Access profiles for instance.

3 Likes

Very interested! We’ve struggled with a lack of reporting since day one and are constantly frankensteining things together for Ernst & Young, and the completeness and accuracy are a nightmare. Would love to solve it with some better scripts. For everything we ask for, we’ve been sent scripts to use in Postman and Ruby rather than the system being built to work well for public audits, and this limitation broke all our reports.

1 Like

I would be interested in obtaining this as well please

1 Like

In all honesty - the “audit capabilities” in IDNow are SEVERELY lacking and inferior to any of the other IAM tools I’ve worked with.

2 Likes

I would also be interested if you are making it available. Thanks.

1 Like

Here is a way I got around the 250 page limit to pull all objects from search.

while ($OffSet -le $TotalCount) {
    Write-Debug "While loop is working $($OffSet)"
    $SearchUrl = "https://$($Tenant).api.identitynow.com/v3/search?offset=$($OffSet.ToString())&count=true"
    $Response = Invoke-WebRequest -Uri $SearchUrl -Headers $headers -Method POST -Body $Body
    $Content += $Response.Content
    $OffSet = $OffSet + 250
}

I make one API call and make sure count=true. This gives me the total number of entries that the search has. I then compare that total to the offset and, for each page, increase the offset by 250. Using this method I was able to create a report of over 8K records. Hope this helps.
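The same offset loop can be sketched in Python. This is a minimal illustration of the technique described above, not the poster’s actual script: `search_page` is a placeholder standing in for the real POST to /v3/search, and `PAGE_SIZE` reflects the 250-per-call cap mentioned in the thread.

```python
# Sketch of offset-based paging: one call with count=true yields the
# total, then pages of 250 are fetched until the offset passes it.

from typing import Callable, List

PAGE_SIZE = 250  # search returns at most 250 results per call


def fetch_all(search_page: Callable[[int, int], List[dict]],
              total_count: int) -> List[dict]:
    """Collect every result by stepping the offset in PAGE_SIZE increments.

    `search_page(offset, limit)` is a placeholder for the actual POST to
    /v3/search; it should return the parsed JSON list for that page.
    """
    results: List[dict] = []
    offset = 0
    while offset <= total_count:
        results.extend(search_page(offset, PAGE_SIZE))
        offset += PAGE_SIZE
    return results
```

Keeping the HTTP call behind a callable also makes the paging logic easy to test against a fake page function.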

Indeed, that is what we do.

You can take it one step further, and do async calls to the endpoint. This means you can send all the paged requests to the endpoint at once, and get the results asynchronously. I was able to do this with Python and asyncio.
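A rough sketch of that asyncio approach, under stated assumptions: `fetch_page` here is a hypothetical async callable standing in for a real HTTP request (e.g. via aiohttp), and the total count is assumed to be known from an initial count=true call.

```python
# Once the total count is known, every remaining page request is issued
# at once and the results are gathered asynchronously.

import asyncio
from typing import Awaitable, Callable, List

PAGE_SIZE = 250


async def fetch_all_pages(
    fetch_page: Callable[[int], Awaitable[List[dict]]],
    total_count: int,
) -> List[dict]:
    offsets = range(0, total_count, PAGE_SIZE)
    # Send every paged request concurrently instead of one at a time.
    pages = await asyncio.gather(*(fetch_page(off) for off in offsets))
    # gather preserves argument order, so results stay sorted by offset.
    return [item for page in pages for item in page]
```

Note that firing all pages at once can trip API rate limits, so in practice you may want to bound concurrency (e.g. with an asyncio.Semaphore).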

Careful with this approach with V3 endpoints or you’ll hit the 10k limit too. Don’t use offset, use searchAfter on a sorted search instead. This is a limit coming from Elasticsearch as mentioned here: Search | SailPoint Developer Community

HTH.

So is this saying Elasticsearch would only ever return 10K records at a time? From what I understood, I thought the 10K limit was the number of API calls allowed within a given time window. Even if I started using searchAfter, in my case I would still be making the same number of calls to the search endpoint.

It means you should use searchAfter to paginate a V3 call whenever the total number of items could exceed 10K. If you paginate with offset, you won’t be able to fetch the 10,001st record and onwards because of how Elasticsearch works. With searchAfter and a sorted query, each request starts from the last item of the previous page (or rather, you should make it so), so the 10K window effectively rolls forward with you.

Makes sense?

I think I am understanding. Do you have an example on how this would be used?

Here’s an example using PowerShell:

For ($i = 0; $i -le $Pagecount; $i++) {
    # Search query
    $searchBody = '{"query": {"query": "created:[now-1d TO now] AND type:*"},
        "sort": ["id"],
        "searchAfter": ["' + $searchAfter + '"]
    }'

    Write-Output "Page $i of $Pagecount; Last ID from previous set: $searchAfter"

    $Results = Invoke-WebRequest -Method POST -Uri "https://$($org).api.identitynow.com/v3/search/events?limit=1" -Headers @{ Authorization = "Bearer $token" } -Body $searchBody -ContentType "application/json"

    $Content = $Results.Content.Substring($Results.Content.IndexOf("[") + 1, $Results.Content.LastIndexOf("]") - 1)

    $Content | Out-File "C:\Users\justin.haines\OneDrive - Optiv Security Inc\xyz\Scripts\Events.json" -Append

    $searchAfter = $Content.Substring($Content.IndexOf("id`":") + 6, 31)
}
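For those working in Python, here is a hedged sketch of the same searchAfter loop. `run_search` is a placeholder for the actual POST to /v3/search, and the stop condition (a page shorter than the limit) is an assumption rather than the fixed page count used in the PowerShell above; the JSON is handled structurally instead of with substring math.

```python
# searchAfter pagination: sort by id, then feed the last id of each
# page into the next query so the 10K window rolls forward.

from typing import Callable, List


def search_after_all(run_search: Callable[[dict], List[dict]],
                     page_limit: int = 250) -> List[dict]:
    results: List[dict] = []
    search_after: List[str] = []
    while True:
        body = {
            "query": {"query": "created:[now-1d TO now] AND type:*"},
            "sort": ["id"],
        }
        if search_after:
            body["searchAfter"] = search_after
        page = run_search(body)
        results.extend(page)
        if len(page) < page_limit:
            break  # a short page means we have seen everything
        search_after = [page[-1]["id"]]  # roll the window forward
    return results
```

Parsing the response with the json module and indexing the last record avoids the brittle Substring/IndexOf arithmetic in the PowerShell version.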

1 Like

cc @kenilelk1 @rrivera1109 @RArroyo

I don’t use a public git repo, so I’ll just post it here for now; I don’t see myself updating it much anyway. Also, I hope it’s readable, and if you have any feedback, feel free to shoot. Not a Python expert in any capacity, so always happy to improve.

import csv
import re
import requests as r
import json
import os

# Update the below values to match your installation
sp_api_url = "https://<tenant>.api.identitynow.com"
clientid = "" 
secret = ""


# Get a bearer token
def get_sp_api_token():
    body = {
        "grant_type": "client_credentials",
        "client_id": clientid,
        "client_secret": secret
    }
    response = r.post(sp_api_url + "/oauth/token", body)
    if response.ok:
        print("API access token obtained")
        return response.json()['access_token']
    else:
        print(response.json())
        # Fail fast: returning None here would make every later call crash
        # with a confusing 'Bearer None' header instead.
        raise SystemExit("Error while obtaining access token")


# get all controlled applications, store name, ID & description
def get_all_apps():
    apps = []
    response = r.get(sp_api_url + "/cc/api/app/list?filter=org", headers=apiCallHeaders)
    if response.ok:
        print("Applications found")
        for item in response.json():
            if item['controlType'] != "PERSONAL":
                print(item['name'])
                apps.append({"appid": item['id'], "appName": item['name'], "appDescription": item['description']})
    else:
        print("Error fetching applications: ")
        print(response)
    return apps


# get all access profiles assigned to each application; store access profile name, description and approvalSchemes
def get_app_access_profiles():
    apps = applications
    for app in applications:
        response = r.get(sp_api_url + "/cc/api/app/getAccessProfiles/" + app['appid'], headers=apiCallHeaders2)
        if response.ok:
            if response.json()['count'] != 0:
                print("Found " + str(response.json()['count']) + " access profiles for " + app['appName'])
                accessprofiles = []
                for item in response.json()['items']:
                    accessprofiles.append({"accessprofile": item['name'], "description": item['description'],
                                           "approvals": item['approvalSchemes']})
                app["accessprofiles"] = accessprofiles
            else:
                print("No access profiles assigned to " + app['appName'])
        else:
            print("Error fetching access profiles for " + app['appName'])
            print(response)
    return apps


# get membership of each governance group per access profile. You could optimise this by fetching all
# governance groups separately and storing them in a variable to iterate over later; at the moment this
# fetches all approvers and also updates the apps list
def get_workgroup_membership():
    apps = access_profiles
    for app in apps:
        if "accessprofiles" in app:
            for ap in app['accessprofiles']:
                if ap['approvals'] is not None:
                    if "," in str(ap['approvals']):
                        ap.update({"approvals": ap['approvals'].split(",")})
                        workgroups = []
                        for approver in ap['approvals']:
                            if "workgroup" in approver:
                                members = []
                                workgroup = r.get(sp_api_url + "/v2/workgroups/" + (approver.split(":")[1]).strip(), headers=apiCallHeaders2)
                                membership = r.get(sp_api_url + "/v2/workgroups/" + (approver.split(":")[1]).strip() + "/members", headers=apiCallHeaders2)
                                for member in membership.json():
                                    members.append(member['email'])
                                workgroups.append({workgroup.json()['name']: members})
                            else:
                                workgroups.append(approver)
                        ap['approvals'] = workgroups
                    else:
                        if "workgroup" in ap['approvals']:
                            members = []
                            workgroup = r.get(sp_api_url + "/v2/workgroups/" + (ap['approvals'].split(":")[1]).strip(), headers=apiCallHeaders2)
                            membership = r.get(sp_api_url + "/v2/workgroups/" + (ap['approvals'].split(":")[1]).strip() + "/members", headers=apiCallHeaders2)
                            for member in membership.json():
                                members.append(member['email'])
                            ap['approvals'] = {workgroup.json()['name']: members}
    print("Approval values updated with governance group names and membership")
    return apps


# save to file
def save_file():
    with open('data.json', 'w') as file:
        json.dump(alldata, file, indent=4)
    return


token = get_sp_api_token()
apiCallHeaders = {'Authorization': 'Bearer ' + token, 'Content-Type': 'application/json'}
apiCallHeaders2 = {'Authorization': 'Bearer ' + token}
applications = get_all_apps()
access_profiles = get_app_access_profiles()
alldata = get_workgroup_membership()
save_file()

Here on LinkedIn

We appreciate your feedback, and audit reporting is a high-priority feature for our Product Management team. We currently have new reporting and dashboarding for auditors in beta, and we would very much appreciate your direct feedback as well as your participation in focus groups.

Is there some kind of workaround for the Bulk AccessProfile and Role Importer?

We are trying to use this tool to create several Access Profiles with quite a few entitlements for each. We build our import tool as outlined but we receive this error:
Entitlements Hash Map Creation is a Failure!
Error: 400
Total number of Entitlements of this source: 526

Upon further inspection of the script, the limit for the endpoint is set to 250 unless the entitlement count of the source is higher, in which case it uses that entitlement count as the limit, and that is what causes this error.

uri = https://{tenant}.api.identitynow.com/cc/api/entitlement/list?limit=526&CISApplicationId={source}

Will there be an update to this tool for pagination?
Further, does this endpoint even support pagination/offset? I tried making this call through Postman, and if I include the ‘offset’ parameter it does not seem to work as a valid parameter, though I do not receive an error for including it.

Hi Zach,

I’ve updated one of them to use beta apis instead. Perhaps you could look at modifying others similarly.

Attaching the script I’ve updated for reference. (Please note you’ll have to change the file extension back to .rb; I had to change it to .txt as this forum doesn’t allow .rb uploads.)
manageRolesAndAccessProfileV8.0.1.txt (52.7 KB)

Reading Fernando’s reply, we may be out of luck since we go over 10K.

Can you detail the changes we need to make? What needs to be done to pull 10,000-15,000 records?