Failsafe mechanisms to mitigate authoritative source outages that lead to role deprovisioning

The vendor responsible for one of our clients' authoritative sources had a critical outage that caused four attributes to be aggregated as empty for a couple of hours. The incident seriously hindered the client's business activities, because their role assignment criteria depend on these attributes being populated correctly (think of attributes such as department). In short, roles were deprovisioned for most employees for a couple of hours.

We are aware that IdentityNow operates under the assumption that the authoritative source is reliable and stable, and that outages on the vendor side are outside of our control. Still, the client has asked whether there are mitigating measures we could take in IdentityNow to prevent a future outage from having such a critical impact on business operations.

I would like to ask the community whether you’ve encountered this issue as well, or whether someone has suggestions we could explore.

Any help is greatly appreciated, thank you in advance!

In the connector definition, the account delete threshold can be set to avoid the mass delete scenario.
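As a sketch only, the same threshold can also be set via the Sources API instead of the UI. The connectorAttributes key name used below (deleteThresholdPercentage) and the placeholder IDs are assumptions on my part, so check your own source's JSON (GET /v3/sources/{id}) for the actual key before relying on this.
======Sketch: setting the delete threshold via the Sources API (Python)======

# Hedged sketch: set an account delete threshold on a source via the v3 Sources API.
# The connectorAttributes key "deleteThresholdPercentage" is an assumption here;
# verify the actual key name in your tenant's source JSON first.
import requests

TENANT_API = "https://{tenant}.api.identitynow.com"   # replace {tenant}
SOURCE_ID = "<authoritative-source-id>"               # placeholder
TOKEN = "<bearer-token>"                              # placeholder

patch_body = [
    {
        "op": "replace",
        "path": "/connectorAttributes/deleteThresholdPercentage",  # assumed key
        "value": 10,  # e.g. abort aggregation if more than 10% of accounts would be deleted
    }
]

resp = requests.patch(
    f"{TENANT_API}/v3/sources/{SOURCE_ID}",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json-patch+json",
    },
    json=patch_body,
)
resp.raise_for_status()
print("Patch status:", resp.status_code)

===========================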

Dear Kerry,
Thank you for your contribution, but I’m not sure how that applies to the use case at hand. The accounts were never deleted. Instead, there was a problem on the backend that caused four of the key attributes to aggregate as empty, so there was no massive account deletion that could have triggered IdentityNow to stop the aggregation due to the threshold being exceeded.

Is it possible to address this potential issue at the source/vendor? For example, if the source data is known to be incomplete or unreliable, deny the aggregation attempt until the source is healthy again? (I'm guessing this is a web services source.) I can't think of a good way to configure this kind of mitigation strategy exclusively in IDN without significant impacts on standard functionality and/or considerably more work elsewhere.
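To make the idea a bit more concrete, here is a very rough sketch of such a "deny aggregation until healthy" gate, assuming aggregations are triggered from a script rather than the built-in scheduler. The vendor health endpoint, its response shape and the aggregation-trigger URL are all placeholders/assumptions, not confirmed APIs.
======Sketch: health-check gate before aggregation (Python)======

# Rough sketch: only trigger the authoritative source aggregation if the
# vendor reports itself healthy. All URLs and the response shape below are
# placeholders, not confirmed IdentityNow or vendor APIs.
import sys
import requests

HEALTH_URL = "https://vendor.example.com/api/health"   # hypothetical vendor health endpoint
AGGREGATION_URL = "https://{tenant}.api.identitynow.com/<aggregation-trigger>"  # placeholder
TOKEN = "<bearer-token>"                               # placeholder

def vendor_is_healthy():
    # Only report healthy if the endpoint answers and says so (assumed response shape).
    try:
        r = requests.get(HEALTH_URL, timeout=10)
        return r.ok and r.json().get("status") == "healthy"
    except requests.RequestException:
        return False

if not vendor_is_healthy():
    print("Vendor reports unhealthy - skipping aggregation to protect identity data.")
    sys.exit(1)

# Vendor is healthy: trigger the aggregation.
requests.post(AGGREGATION_URL, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=60)

===========================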

Hi Irene,
What is the type of authoritative source connector in your case?

We have successfully used two different approaches for pre-checking and correcting the authoritative source data flow; which one fits depends on the type of the authoritative source connector.

With kind regards,
Dmitri


Thank you for your response. The client has indeed already raised this with the vendor. Mitigation procedures in IdentityNow are just an additional safety layer they would like to add, should that be available.


Thank you for your response, Dimitri.

It’s a web services connector. Which connector did you use for your authoritative source?

Kind regards,

Irene

Hi Irene,
We designed and used different BuildMap rules to perform relatively complex substitution and protection of critical attributes during aggregation for various SAP and JDBC authoritative sources. In one of our use cases we implemented intelligent amending in a BuildMap rule to substitute the managerID in several critical scenarios and/or to assign a special lifecycle state when an account was missing critical attributes from the source.

In another scenario we did full script-based pre-processing outside of IDN to protect against missing critical data in a flat text/CSV file coming from an SAP system, covering quite a rich set of scenarios. When the script finishes successfully, it triggers the authoritative source aggregation via the API; any UI-based aggregation of that authoritative source was switched off.
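Purely as an illustration (not our actual implementation), the core of such a pre-processing gate can be as simple as the following; the file name, delimiter and critical column names below are example assumptions:
======Sketch: feed validation before aggregation (Python)======

# Minimal sketch: refuse to hand a delimited feed on to aggregation if any
# record has an empty critical attribute. File name and column names are
# examples only.
import csv
import sys

FEED_FILE = "hr_feed.csv"                                          # example feed file
CRITICAL_COLUMNS = ["department", "managerID", "costCenter", "location"]  # example attributes

def feed_is_safe(path):
    # Return False as soon as a record is missing a critical attribute.
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            for col in CRITICAL_COLUMNS:
                if not (row.get(col) or "").strip():
                    print("Empty '%s' for record %s" % (col, row.get("employeeID", "?")))
                    return False
    return True

if not feed_is_safe(FEED_FILE):
    sys.exit(1)   # abort: the aggregation is not triggered for this feed

# ...otherwise the script goes on to trigger the authoritative source aggregation via the API.

===========================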

In your case you might not need to design such a complex solution; instead, you could use the identity attribute's previous value and a simple Transform.

Case 1. Protection of essential attribute using its previous value:
======Transform for Case 1=================

{
    "name": "critical_Attribute1_Protected",
    "type": "static",
    "attributes": {
        "requiresPeriodicRefresh": "true",
        "critical_Attribute1": {
            "attributes": {
                "values": [
                    {
                        "attributes": {
                            "attributeName": "critical_Attribute1",
                            "sourceName": "My_Authoritative_SourceName"
                        },
                        "type": "accountAttribute"
                    },
                    "null"
                ]
            },
            "type": "firstValid"
        },
        "critical_Attribute1_Past": {
            "attributes": {
                "values": [
                    {
                        "attributes": {
                            "name": "critical_Attribute1" <== previous non-empty attribute value kept at the identity
                        },
                        "type": "identityAttribute"
                    },
                    "null"
                ]
            },
            "type": "firstValid"
        },
        "value": "#if($critical_Attribute1=='null'&&$critical_Attribute1_Past!='null')$critical_Attribute1_Past#{elseif}($critical_Attribute1!='null')$critical_Attribute1#{elseif}($critical_Attribute1=='null' && $critical_Attribute1_Past=='null')default_value_for_critical_Attribute1#{else}$critical_Attribute1#end"

    },
    "internal": false
}

===========================
You may need to define a safe ‘default_value_for_critical_Attribute1’ for new employees/starters. Note that the identityAttribute reference inside critical_Attribute1_Past reads the previous non-empty attribute value kept on the identity.

With kind regards,
Dimitri


May I ask on what day this happened?

Dear Dimitri,
I got around to testing the transform you proposed, and it works smoothly.
Thank you very much, your contribution is greatly appreciated.
Kind regards,
Irene


Dear Renee,
I appreciate your question; may I ask why it's relevant?
Kind regards,
Irene

It is very similar to an issue that happened in our tenant, so I just wanted to know when this happened. That's all.

This looks like an incorrect solution. Since there is no defined order for identity attribute calculation, how can we be sure that the identityAttribute reference you have written inside critical_Attribute1_Past stores the previous value?

Representation:

Identity Profile Mapping:

Critical_Attribute1 => mapped to critical_Attribute1 of the authoritative source.

When the aggregation runs, your Critical_Attribute1 becomes the latest value, so if you refer to it within your transform, it will be the new value, not the previous non-null value.

I don’t think so.

how can we be sure that the identityAttribute reference you have written inside critical_Attribute1_Past stores the previous value?

Inside a static operation transform, all the variables must be precalculated and non-null before the transform passes them to the Velocity template with the final value [and the final-value logic conditions such as if/else, set, contains, foreach, etc.]. Therefore, the old identity attribute value is collected before the Velocity template is rendered. Depending on whether simple or complex business logic is used, you can define the needed, expected and protected target identity attribute value here, assigning either the collected old identity value or the new value from the source.
The firstValid transform operation is just a limited analogue of the well-known multi-source attribute precedence procedure.
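Expressed outside of Velocity, the decision the Case 1 template makes is only this; a plain Python paraphrase for illustration, where 'null' is the literal string that the firstValid fallbacks return when a value is missing:
======Paraphrase of the Case 1 value logic (Python)======

# Python paraphrase of the velocity value logic in the Case 1 transform.
def protected_value(new_value, previous_value,
                    default="default_value_for_critical_Attribute1"):
    if new_value == "null" and previous_value != "null":
        return previous_value   # source came back empty: keep the old identity value
    if new_value != "null":
        return new_value        # source delivered a value: take it
    if new_value == "null" and previous_value == "null":
        return default          # new identity without history: use the safe default
    return new_value            # final #else branch of the template

# Example: the outage delivers an empty attribute while the identity already had "Finance"
assert protected_value("null", "Finance") == "Finance"

===========================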

I do not agree with your point here. My point is that the calculation of identity attributes has no order, so if in the next aggregation your critical_Attribute1 is calculated first and your static transform on the other attribute runs afterwards, it would store the new value rather than the previous one. The explanation of how an identity attribute is calculated doesn't matter when there is no particular order in which identity attributes are calculated.

Hi,
Describe your use case: what is the input, what is the target, and what behaviour do you expect to see? It seems we are talking about a different scenario.

The scenario here is similar to this one:

We want to know which identity attribute is updated and how to save its previous value.
