Hi all,
Suppose I have a source from a web service connector. I take a look at its cluster in the JSON and see this reference:
"cluster":{"type":"CLUSTER","id":"0b0dbb6ea6c9428284d5d020bcb00d0a","name":"my-cluster"}
I can then see the content of this cluster using the API endpoint GET /v3/managed-clusters/:id
Similarly, I have a second source, with this in its JSON:
"cluster":{"type":"CLUSTER","id":"7a171dd8957b406880d91497750f883a","name":"sp_connect_proxy_cluster"}
Again, I can see the content of this cluster using the same API endpoint, GET /v3/managed-clusters/:id.
However, if I now call GET /v3/managed-clusters, I get a list of clusters that contains the first one, but not the second one.
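To make the inconsistency concrete, here is a minimal Python sketch of the check I am effectively doing by hand: take the cluster ids referenced by sources, and see which of them the list endpoint omits. The ids are the ones from the two sources above; in a real script the two lists would come from GET /v3/sources and GET /v3/managed-clusters.

```python
def find_unlisted(referenced_ids, listed_ids):
    """Return cluster ids that sources reference but the list endpoint omits."""
    return sorted(set(referenced_ids) - set(listed_ids))

# Cluster ids referenced by the two sources above:
referenced = [
    "0b0dbb6ea6c9428284d5d020bcb00d0a",  # my-cluster
    "7a171dd8957b406880d91497750f883a",  # sp_connect_proxy_cluster
]
# What GET /v3/managed-clusters returns (only the first cluster is listed):
listed = ["0b0dbb6ea6c9428284d5d020bcb00d0a"]

print(find_unlisted(referenced, listed))
# → ['7a171dd8957b406880d91497750f883a']
```

Both ids resolve fine via the get-by-id endpoint, yet the second never shows up in the list.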
This is inconsistent. If we can get the object by its id, it should also show up on the list-all endpoint.
This has an impact on our scripts. In general, if you call the endpoint to list all objects (sources, transforms, workflows, etc.) and apply pagination, you know you have all objects, which takes far fewer API calls than getting them one by one. So once you have all objects, you know that any id not in that list does not exist (anymore). However, here that turns out not to be the case.
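The list-then-lookup pattern our scripts rely on can be sketched like this (Python; fetch_page is a stand-in for whatever HTTP client you use, e.g. wrapping GET /v3/managed-clusters?offset={offset}&limit={limit}):

```python
def list_all(fetch_page, limit=250):
    """Collect every object from a paginated list endpoint.

    fetch_page(offset, limit) is assumed to return one page as a list;
    a short or empty page means we are past the end.
    """
    items, offset = [], 0
    while True:
        page = fetch_page(offset, limit)
        items.extend(page)
        if len(page) < limit:
            return items
        offset += limit
```

A tenant with 600 clusters costs 3 calls this way instead of 600 get-by-id calls, and the resulting list is supposed to be exhaustive, which is exactly the assumption the missing SaaS cluster breaks.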
This looks like a bug to me. Would you also consider it a bug?
I lean to yes: if an object is retrievable through /v3/pluralized-noun/:noun-id, it should in my opinion also be visible under /v3/pluralized-noun (taking pagination into account, of course).
One could argue that the cluster belonging to the SaaS connector should not appear in the list API because it is not managed by the customer, and that would be a fair point. However, since we still need to see its content, I would say it still needs to show up in the list API, similar to how internal transforms are visible.
So my preferred solution would be to rename the endpoint to /v3/clusters and show all of them, adding a filter such as /v3/clusters?filters=managed eq true to exclude the SaaS-based clusters.
After all, the objects are all of type CLUSTER anyway, as you can see in the source JSON, not of type managed-cluster.
And if that is not possible, I would prefer that all of them still be visible under the list API without a rename. Otherwise we would not be able to use the API to create a SaaS source, since it needs this cluster id attached in order to work, and we don't know the id without using the list endpoint (and filtering by name).
So the cluster that you are not seeing is the one for the SaaS connector, which, as you noted, is not a customer-managed cluster. From a product standpoint, they will probably consider this an enhancement instead of a bug, but I will verify. They will likely want to know why you need the cluster information for a SailPoint-managed cluster.
I actually just ran into this same issue. I would assume product would also say this is by design, similar to how they do not include default SailPoint cloud rules in SP-Config export, since this is some backend proxy cluster for SaaS connectors.
In order to circumvent this when creating a new SaaS source programmatically, you need to send the POST call with barebones info with no cluster included, allow the full source to then get created in the backend, and then you can fetch that cluster information from the source after that. This post helped figure out this process: Issue with Okta SaaS Connector: Test Connection Fails When Created via API - #10 by swamy97. This is essentially how it is done when you create a source via the UI.
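A hedged sketch of that workaround in Python; post and get are stand-ins for authenticated calls to POST /v3/sources and GET /v3/sources/{id}, and the exact set of required fields is an assumption (check the linked thread for a working payload):

```python
def create_saas_source(post, get, source_body):
    """Workaround sketch: create a SaaS source with no cluster attached,
    then read the backend-assigned proxy cluster back off the source.

    post(path, body) and get(path) are injected HTTP helpers. In practice
    the backend may need a short wait/poll before the cluster appears.
    """
    # Send the barebones body with the cluster stripped out.
    body = {k: v for k, v in source_body.items() if k != "cluster"}
    created = post("/v3/sources", body)
    # The backend attaches the proxy cluster itself; fetch it back.
    refreshed = get("/v3/sources/" + created["id"])
    return refreshed["cluster"]
```

This mirrors what the UI apparently does: it never asks you for the proxy cluster either.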
So if I have a role with id X and I call GET /v3/access-profiles/X, I would rightfully expect a 404 NotFoundError, since it is not an access profile. So since you are saying that the SaaS cluster (with id Y) is not a managed cluster, and therefore rightfully does not show up in GET /v3/managed-clusters, I would also expect a 404 NotFoundError when calling GET /v3/managed-clusters/Y, since no "managed cluster" with id Y exists.
There are two reasons why we need to be able to view the SaaS cluster just like the other clusters.
- Like Patrick mentioned, a common way to create sources (whether manually or partially through a script) is to first create the source in the sandbox tenant, ensure everything is working, get the JSON of that source, and use it as the body for the POST /v3/sources endpoint, where we update the values that need updating such as description, URL, et cetera, and afterwards update the other source-related objects such as schemas, provisioning policies, etc. One part of this is to ensure we are pointing at the right cluster. In sandbox, the cluster with the same name has a different id than in production. So before creating the source, we change the cluster id to the one matching production. How do we know which id to use? We call the GET /v3/managed-clusters endpoint, find the cluster we need (which we can determine by name), and put its id in the source JSON when creating. This strategy worked for every source until SaaS sources were introduced, since their clusters have been omitted from this API. So strategies that previously worked now fail, because we can't find this cluster in the list to fetch its id. The workaround @patrickboston mentioned works, but it is a strange workaround that should not be needed in the first place.
- What if we want to use the API to update a source password (secret, token, etc.)? We could send the password in plaintext in the source JSON, but that is bad practice. What we should do is encrypt the password locally and send the encrypted password through the API. To encrypt the password, we need the public key, which can be found in the cluster JSON. We can use this to update credentials on non-SaaS sources, and we need to do it for SaaS sources as well, since they also have credentials. For this we would need to be able to fetch the cluster JSON, i.e. something like GET /v3/clusters/Y. Note that even the UI actually fetches the non-customer-managed clusters through GET /v3/managed-clusters/Y, probably because they realized they also need this public key to encrypt the password in the UI session. It looks like they added the non-customer-managed cluster to the managed-clusters get-by-id API to avoid needing /v3/clusters/Y, while that is actually what we would need.
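The sandbox-to-production remap from the first point can be sketched like this (Python; the cluster field names follow the source JSON shown at the top of the thread, and target_clusters is assumed to be the paginated result of GET /v3/managed-clusters in the target tenant):

```python
def remap_cluster(source_json, target_clusters):
    """Point a sandbox-exported source at the target-tenant cluster
    that has the same name, by swapping in its id."""
    name = source_json["cluster"]["name"]
    matches = [c for c in target_clusters if c["name"] == name]
    if not matches:
        # This is exactly the failure mode for SaaS sources: the proxy
        # cluster never appears in the list, so no match is found.
        raise LookupError("no cluster named %r in target tenant" % name)
    source_json["cluster"]["id"] = matches[0]["id"]
    return source_json
```

For regular sources the lookup succeeds; for SaaS sources it raises, because the proxy cluster is missing from the list.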
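For the second point, a hedged sketch of the first step of the credential-update flow. The field path holding the public key is an assumption on my part (inspect GET /v3/managed-clusters/{id} in your own tenant for the real shape), and the encryption scheme itself is not shown because it is not documented here:

```python
def extract_public_key(cluster_json):
    """Pull the public key out of a cluster's JSON.

    The path ("configuration" -> "publicKey") is assumed, not confirmed;
    adjust it to match what your tenant actually returns.
    """
    key = cluster_json.get("configuration", {}).get("publicKey")
    if key is None:
        raise KeyError("cluster JSON has no public key at the assumed path")
    return key

# Once you have the key, encrypt the secret locally (the UI presumably
# does the same in-session) and send the encrypted value through the
# source API instead of the plaintext password.
```

For SaaS sources this step is impossible today unless the get-by-id endpoint keeps serving the proxy cluster, which is why hiding it from the API entirely is not an option.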
I would disagree that this should be considered an enhancement request; it is rather a fix request.
We originally had this functionality:
1: For any source, you can easily use the POST /v3/sources API to directly create a source object, including cluster, configurations, etc.
2: For any source, you can easily use the source API to securely update credentials.
Then a new source type was added, in such a way that this previous functionality no longer holds. I don't want sources to be enhanced so they offer more functionality; I want sources to be fixed so they still offer the same functionality.
I hope this makes sense. I view this as another example of the pattern of new ISC functionality not respecting already existing ISC functionality:
There is a search event whenever an email is sent by ISC, so you can use it for audit/research purposes. Then workflows were introduced with send-email capabilities, and now search events no longer always show when emails get sent. Or: managers can revoke access; then segmentation was added for access requests to ensure not everyone can request access → managers could no longer revoke access if they were not in that segment (this has since been fixed).