I have a tenant with over 10,000 entitlements, and I need to retrieve and store this data quickly using Python. Currently, my approach is quite slow and not efficient for handling such a large volume of data.
Does anyone have any suggestions or best practices for efficiently calling and storing a large number of entitlements in Python? Are there specific libraries or techniques that could help speed up this process?
@VasanthRam - As @GOKUL_ANANTH_M mentioned, for pagination the limit will be 1000 for entitlements; you can fetch 250 entitlements at a time in parallel using threads to be more efficient!
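A minimal sketch of that paged approach, fetching pages of 250 in parallel. The tenant URL, bearer token, and the `limit`/`offset` query parameters are assumptions based on typical IdentityNow v3 endpoints - adjust them for your tenant:

```python
import concurrent.futures
import requests

BASE_URL = "https://{tenant}.api.identitynow.com/v3/entitlements"  # hypothetical; replace {tenant}
HEADERS = {"Authorization": "Bearer <access_token>"}  # replace with a real OAuth token
PAGE_SIZE = 250  # page size per the reply above

def page_offsets(total, page_size=PAGE_SIZE):
    """Offsets for every page needed to cover `total` records."""
    return list(range(0, total, page_size))

def fetch_page(offset):
    """Fetch one page of entitlements starting at `offset`."""
    params = {"limit": PAGE_SIZE, "offset": offset}
    resp = requests.get(BASE_URL, headers=HEADERS, params=params)
    resp.raise_for_status()
    return resp.json()

def fetch_all(total):
    """Fetch all pages in parallel and flatten them into one list."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
        pages = executor.map(fetch_page, page_offsets(total))
    return [item for page in pages for item in page]
```

For ~10,000 entitlements that is only 40 requests, so even a modest worker count finishes quickly.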
To handle a large number of entitlements efficiently, you can use Python’s concurrent.futures module to run the fetches with multiple workers. For IdentityNow, the maximum API rate is 100 requests per 10 seconds, so you can use up to 100 concurrent workers. Here’s a basic example of how you can achieve this:
```python
import concurrent.futures
import requests

def fetch_entitlement(entitlement_id):
    """Make the API call for a single entitlement ID."""
    url = f"https://your-api-endpoint/{entitlement_id}"
    response = requests.get(url)
    response.raise_for_status()  # surface HTTP errors instead of parsing an error body
    return response.json()

def main(entitlement_ids):
    with concurrent.futures.ThreadPoolExecutor(max_workers=100) as executor:
        results = list(executor.map(fetch_entitlement, entitlement_ids))
    return results

if __name__ == "__main__":
    entitlement_ids = [f"id_{i}" for i in range(10000)]  # Replace with your actual IDs
    entitlements = main(entitlement_ids)
    # Now `entitlements` contains all your retrieved data
    print(entitlements)
```
In this example:

- `fetch_entitlement` makes the API call for a given entitlement ID.
- `main` uses a `ThreadPoolExecutor` with 100 workers to fetch entitlements concurrently.
- The `results` list stores all the retrieved entitlements.
This approach should significantly speed up the process. Just ensure you handle exceptions and rate limiting as needed.
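On that last point, here is a minimal retry sketch, assuming the API signals rate limiting with HTTP 429 and an optional `Retry-After` header (common, but verify against your tenant); `fetch_with_retry` and `backoff_seconds` are hypothetical helper names:

```python
import time
import requests

def backoff_seconds(attempt):
    """Exponential backoff schedule: 1s, 2s, 4s, 8s, ..."""
    return 2 ** attempt

def fetch_with_retry(url, session=None, max_retries=5):
    """GET `url`, retrying on HTTP 429 with backoff; raise on other errors."""
    session = session or requests.Session()
    for attempt in range(max_retries):
        response = session.get(url)
        if response.status_code == 429:
            # Honor Retry-After if the server sends it, else back off exponentially
            time.sleep(float(response.headers.get("Retry-After", backoff_seconds(attempt))))
            continue
        response.raise_for_status()
        return response.json()
    raise RuntimeError(f"Still rate-limited after {max_retries} attempts: {url}")
```

You could drop this in place of the plain `requests.get` call inside `fetch_entitlement`, and consider lowering `max_workers` if you still see 429s.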