We're facing a very similar difficulty on our current IdentityNow project for custom reporting. I've been trying to tackle it with some fairly complex PowerShell, but I'm not satisfied with the results yet.
It would be a real help if you could kindly share your Python script with me too; it would help a lot with our task.
Very interested! We've struggled with the lack of reporting since day one and are constantly frankensteining things together for Ernst & Young, and the completeness and accuracy checks are a nightmare. We'd love to solve it with some better scripts. For each thing we've asked for, we've been sent scripts to run in Postman and Ruby rather than the system being built to work well for public audits, and this limitation has broken all our reports.
I make one API call with count=true. This gives me the total number of entries that the search matches. I then compare that total to the offset, and for each page I increase the offset by 250. Using this method I was able to create a report of over 8K records. Hope this helps.
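A minimal sketch of that loop, assuming a `fetch_page` wrapper around your actual V3 search call (the names `page_offsets`, `fetch_all`, and `fetch_page` are illustrative, not part of any SailPoint SDK). The simulated endpoint at the bottom just stands in for the real HTTP request:

```python
# Offset-based pagination: learn the total from one count=true call,
# then walk the offset forward by the page size until everything is fetched.
PAGE_SIZE = 250

def page_offsets(total, page_size=PAGE_SIZE):
    """Return the offset for each page needed to cover `total` records."""
    return list(range(0, total, page_size))

def fetch_all(fetch_page, total):
    """Pull every page and flatten the results into one list."""
    results = []
    for offset in page_offsets(total):
        results.extend(fetch_page(offset=offset, limit=PAGE_SIZE))
    return results

# Simulated endpoint holding 8,100 dummy records, served 250 at a time
records = list(range(8100))
fake_fetch = lambda offset, limit: records[offset:offset + limit]
print(len(fetch_all(fake_fetch, len(records))))  # 8100
```

In the real script, `fetch_page` would be a `requests.get` (or POST for `/v3/search`) that passes `offset` and `limit` through as query or body parameters.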
You can take it one step further, and do async calls to the endpoint. This means you can send all the paged requests to the endpoint at once, and get the results asynchronously. I was able to do this with Python and asyncio.
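A rough sketch of that fan-out pattern with `asyncio.gather`. Here `fetch_page` is a simulated coroutine standing in for a real async HTTP call (e.g. via `aiohttp` or `httpx`, neither of which is shown here); only the gather/flatten structure is the point:

```python
import asyncio

PAGE_SIZE = 250

async def fetch_page(offset):
    # Stand-in for an async HTTP call; simulates a network round trip
    # and returns dummy records for this page.
    await asyncio.sleep(0)
    return list(range(offset, offset + PAGE_SIZE))

async def fetch_all(total):
    # Fire off every paged request concurrently and gather the results.
    # gather() preserves the order of the offsets, so the flattened list
    # comes back in the same order as sequential paging would produce.
    offsets = range(0, total, PAGE_SIZE)
    pages = await asyncio.gather(*(fetch_page(o) for o in offsets))
    return [record for page in pages for record in page]

results = asyncio.run(fetch_all(1000))
print(len(results))  # 1000
```

One caveat worth noting: firing every page at once can trip API rate limits, so in practice you may want to bound concurrency with an `asyncio.Semaphore`.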
Careful with this approach on V3 endpoints or you'll hit the 10K limit too. Don't use offset; use searchAfter on a sorted search instead. This limit comes from Elasticsearch, as mentioned here: Search | SailPoint Developer Community
So is this saying that Elasticsearch would only ever return 10K records in total? From what I understood, I thought the 10K limit was the number of API calls allowed within a given time window. Even if I started using searchAfter, in my case I would still be making the same number of calls to the search endpoint.
This means you should use searchAfter to paginate a V3 call whenever you suspect the total number of items could exceed 10K. If you paginate with offset, you won't be able to fetch the 10,001st record onwards because of how Elasticsearch works. Using searchAfter with a sorted query, however, moves the window's start index to the last item of your previous query (or rather, you should make it do so). It's like a rolling window: the number of calls is the same, but no single query ever asks Elasticsearch to skip past 10K results.
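The rolling window can be sketched like this. `INDEX` and `search` simulate a sorted Elasticsearch index; against the real API you would POST to `/v3/search` with a `"sort"` field and pass the last sort value back as `"searchAfter"` (the function names here are illustrative):

```python
# Simulated sorted index of 25,000 docs -- well past the 10K offset ceiling.
INDEX = [{"id": f"{n:06d}"} for n in range(25000)]
PAGE_SIZE = 250

def search(sort_after=None, limit=PAGE_SIZE):
    # Return the next `limit` docs whose sort key is greater than sort_after.
    docs = INDEX if sort_after is None else [d for d in INDEX if d["id"] > sort_after]
    return docs[:limit]

def search_all():
    results, last = [], None
    while True:
        page = search(sort_after=last)
        if not page:
            break
        results.extend(page)
        last = page[-1]["id"]  # the window rolls forward from the last item seen
    return results

print(len(search_all()))  # 25000
```

Each query only ever reads `PAGE_SIZE` items from wherever the previous page ended, so no request hits the 10K depth limit no matter how large the result set is.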
I don't use a public git repo, so I'm just going to post it here for now; I don't see myself updating it much anyway. I hope it's readable, and if you have any feedback, feel free to shoot. I'm not a Python expert in any capacity, so I'm always happy to improve.
import json
import requests as r

# Update the below values to match your installation
sp_api_url = "https://<tenant>.api.identitynow.com"
clientid = ""
secret = ""

# Get a bearer token
def get_sp_api_token():
    body = {
        "grant_type": "client_credentials",
        "client_id": clientid,
        "client_secret": secret
    }
    response = r.post(sp_api_url + "/oauth/token", data=body)
    if response.ok:
        print("API access token obtained")
        return response.json()['access_token']
    else:
        print(response.json())
        raise SystemExit("Error while obtaining access token")

# Get all controlled applications; store name, ID and description
def get_all_apps():
    apps = []
    response = r.get(sp_api_url + "/cc/api/app/list?filter=org", headers=apiCallHeaders)
    if response.ok:
        print("Applications found")
        for item in response.json():
            if item['controlType'] != "PERSONAL":
                print(item['name'])
                apps.append({"appid": item['id'], "appName": item['name'],
                             "appDescription": item['description']})
    else:
        print("Error fetching applications: ")
        print(response)
    return apps

# Get all access profiles assigned to each application.
# Store access profile name, description and approvalSchemes.
def get_app_access_profiles():
    apps = applications
    for app in apps:
        response = r.get(sp_api_url + "/cc/api/app/getAccessProfiles/" + app['appid'],
                         headers=apiCallHeaders2)
        if response.ok:
            if response.json()['count'] != 0:
                print("Found " + str(response.json()['count']) + " access profiles for " + app['appName'])
                accessprofiles = []
                for item in response.json()['items']:
                    accessprofiles.append({"accessprofile": item['name'],
                                           "description": item['description'],
                                           "approvals": item['approvalSchemes']})
                app["accessprofiles"] = accessprofiles
            else:
                print("No access profiles assigned to " + app['appName'])
        else:
            print("Error fetching access profiles for " + app['appName'])
            print(response)
    return apps

# Get membership of each governance group per access profile.
# You could optimise this by fetching all governance groups once, storing them
# in a variable and iterating over that later. At the moment this fetches all
# approvers and also updates the apps list.
def get_workgroup_membership():
    apps = access_profiles
    for app in apps:
        if "accessprofiles" in app:
            for ap in app['accessprofiles']:
                if ap['approvals'] is not None:
                    if "," in str(ap['approvals']):
                        ap.update({"approvals": ap['approvals'].split(",")})
                        workgroups = []
                        for approver in ap['approvals']:
                            if "workgroup" in approver:
                                members = []
                                workgroup_id = approver.split(":")[1].strip()
                                workgroup = r.get(sp_api_url + "/v2/workgroups/" + workgroup_id,
                                                  headers=apiCallHeaders2)
                                membership = r.get(sp_api_url + "/v2/workgroups/" + workgroup_id + "/members",
                                                   headers=apiCallHeaders2)
                                for member in membership.json():
                                    members.append(member['email'])
                                workgroups.append({workgroup.json()['name']: members})
                            else:
                                workgroups.append(approver)
                        ap['approvals'] = workgroups
                    else:
                        if "workgroup" in ap['approvals']:
                            members = []
                            workgroup_id = ap['approvals'].split(":")[1].strip()
                            workgroup = r.get(sp_api_url + "/v2/workgroups/" + workgroup_id,
                                              headers=apiCallHeaders2)
                            membership = r.get(sp_api_url + "/v2/workgroups/" + workgroup_id + "/members",
                                               headers=apiCallHeaders2)
                            for member in membership.json():
                                members.append(member['email'])
                            ap['approvals'] = {workgroup.json()['name']: members}
    print("Approval values updated with governance group names and membership")
    return apps

# Save to file
def save_file():
    with open('data.json', 'w') as file:
        json.dump(alldata, file, indent=4)

token = get_sp_api_token()
apiCallHeaders = {'Authorization': 'Bearer ' + token, 'Content-Type': 'application/json'}
apiCallHeaders2 = {'Authorization': 'Bearer ' + token}
applications = get_all_apps()
access_profiles = get_app_access_profiles()
alldata = get_workgroup_membership()
save_file()
We appreciate your feedback, and audit reporting is a high-priority feature for our Product Management team. We currently have new reporting and dashboarding for auditors in beta, and we would very much appreciate your direct feedback as well as your participation in focus groups.
We are trying to use this tool to create several Access Profiles, each with quite a few entitlements. We built our import tool as outlined, but we receive this error:
Entitlements Hash Map Creation is a Failure!
Error: 400
Total number of Entitlements of this source: 526
Upon further inspection of the script, the limit for the endpoint is set to 250 unless the entitlement count of the source is higher, in which case it uses that entitlement count as the limit, and that is what is causing this error:
uri = https://{tenant}.api.identitynow.com/cc/api/entitlement/list?limit=526&CISApplicationId={source}
Will there be an update to this tool for pagination?
Further, does this endpoint even support pagination/offset? I tried making this call through Postman: if I include the 'offset' parameter, it does not seem to work as a valid parameter, but I also don't receive an error for including it.
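For what it's worth, the fix on the client side would be to stop passing the full entitlement count as `limit` and instead page in fixed chunks of 250. Whether `/cc/api/entitlement/list` actually honours an `offset` parameter is exactly the open question above, so treat `fetch_chunk` here as a placeholder for whichever endpoint you end up using; `fetch_entitlements` is an illustrative name, and the simulated source just mirrors the 526-entitlement case from the error report:

```python
# Page in fixed chunks and stop when a short (or empty) page comes back.
LIMIT = 250

def fetch_entitlements(fetch_chunk):
    """Keep requesting pages of LIMIT until a short page signals the end."""
    entitlements, offset = [], 0
    while True:
        chunk = fetch_chunk(limit=LIMIT, offset=offset)
        entitlements.extend(chunk)
        if len(chunk) < LIMIT:
            return entitlements
        offset += LIMIT

# Simulated source with 526 entitlements: fetched in three calls (250 + 250 + 26)
source = [f"entitlement-{n}" for n in range(526)]
fake_fetch = lambda limit, offset: source[offset:offset + limit]
print(len(fetch_entitlements(fake_fetch)))  # 526
```

The same loop works unchanged against an endpoint with documented limit/offset pagination, which may be the stronger argument for moving the tool off the private `/cc` API.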
I've updated one of them to use the beta APIs instead; perhaps you could look at modifying the others similarly.
I'm attaching the updated script for reference. (Please note you'll have to change the file extension back to .rb; I had to change it to .txt as this forum doesn't allow .rb uploads.) manageRolesAndAccessProfileV8.0.1.txt (52.7 KB)