IDW Rule Runner Plugin: Rapid Development and Troubleshooting in the Browser

Additional Resources


Here is the link to the Rule Runner public repository:

My prototyped customization rule:

// === this is for development, remove when complete
import sailpoint.connector.ConnectorFactory;
import sailpoint.connector.Connector;
import sailpoint.object.ResourceObject;
import sailpoint.tools.CloseableIterator;
import sailpoint.tools.Util;

// getConnector is static, so there is no need to instantiate the factory
Connector ac = ConnectorFactory.getConnector(application, null);
// === end of block


try {
  // Verify no issues with the Application configuration
  // If this application has a large number of accounts you can decache through the loop and check for edge cases
  // If you have one user in mind you can always change this to something like a getObject
  CloseableIterator<ResourceObject> ro = ac.iterateObjects("account", null, null);
  while(ro.hasNext()){
    ResourceObject object =;
    // Let's start by seeing what data we have to work with
    // This also gives you a chance to check for those pesky edge cases before going live
    // if(Util.isNullOrEmpty(object.getStringAttribute("id")) || Util.isNullOrEmpty(object.getStringAttribute("username")) || object.getStringAttribute("username").length() < 3){
    //    log.error("null value found or username length too short!");
    //    log.error(object.getAttributes());
    //    object.setAttribute("id","100");
    // }
    // Ok, that's what we are looking for, let's finish this out
    // log.debug(object.getStringAttribute("username").substring(0,3) + object.getStringAttribute("id"));
    // object.setAttribute("id", object.getStringAttribute("username").substring(0,3) + object.getStringAttribute("id"));
    // log.debug(object.getAttributes());
    // This looks great! Let's return the object and verify
    //return object;
  }
  ro.close();
} catch (Exception e){
  log.error("Error iterating accounts: " + e);
}
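Outside of IIQ, the attribute transformation being prototyped above can be sanity-checked in plain Java. This is a hypothetical standalone sketch (the class, the method name, and the "100" fallback mirror the commented prototype; nothing here is a SailPoint API):

```java
// Hypothetical standalone sketch of the customization above: prefix the id
// with the first three characters of the username, guarding against the
// null/short edge cases the commented checks in the rule look for.
class IdCustomizer {

    static String deriveId(String username, String id) {
        if (id == null) {
            id = "100"; // fallback default used in the prototype when the id is missing
        }
        if (username == null || username.length() < 3) {
            return id; // username too short to build a prefix from
        }
        return username.substring(0, 3) + id;
    }

    public static void main(String[] args) {
        System.out.println(deriveId("jsmith", "42")); // jsm42
        System.out.println(deriveId("ab", "42"));     // 42
    }
}
```

Prototyping the pure-string logic like this first makes it easy to confirm the edge cases before wiring it into the live customization rule.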

My stale workflow case cleanup rule:

import sailpoint.object.*;
import sailpoint.api.Terminator;
import java.util.*;
import java.util.concurrent.TimeUnit;

// Demoing the stop function

try {
  // Set this to the name (or name prefix) of the objects to be deleted
  String workflowCaseName = "Dead WorkflowCase";

  // Set this to the number of days workflowCases should be kept, e.g. 1 = delete all objects older than 1 day
  int daysGapToDelete = 30;

  int countDeletedWorkflows = 0;
  Terminator term = new Terminator(context);
  QueryOptions qo = new QueryOptions();

  // Calculate cutoff date
  Calendar cal = new GregorianCalendar();
  cal.add(Calendar.DATE, -daysGapToDelete);
  Date cutOffDate = cal.getTime();

  //Get WorkflowCases matching the name criteria and created before the cutoff date
  qo.addFilter("created", cutOffDate));
  qo.addFilter("name", workflowCaseName, Filter.MatchMode.START));

  // Projection search for just the ids so every case isn't loaded up front
  Iterator it =, qo, "id");

  int limiter = 0;
  String listOfWfCase = "";

  while(it.hasNext()){"Found workflowcases to delete");
    Object[] row = (Object[]);
    String id = (String) row[0];"Found id: " + id);
    WorkflowCase wfCase = context.getObject(WorkflowCase.class, id);"Got wfCase: " + wfCase);

    // Pull the WorkflowCase into the termination list if its TaskResult is null, this is the cause of the PPM errors
    if(wfCase != null){
      if(wfCase.getTaskResult() == null){
        listOfWfCase = listOfWfCase + "Found WorkflowCase for termination " + wfCase.getId() + " - " + wfCase.getName() + " - " + wfCase.getTaskResultId() + "\n";
        try {
          term.deleteObject(wfCase);
          countDeletedWorkflows++;
        } catch (Exception e) {
"Got exception deleting workflowcase: " + e.toString());
          return wfCase;
        }
        limiter++;
        // Adjust this value for performance adjustments
        if(limiter % 200 == 0){
          context.decache();
        }
      } else {
"wfCase.getTaskResult(): " + wfCase.getTaskResult());
      }
    } else {
"wfCase is null");
    }
  }"finished loop with limiter: " + limiter);

  if(!listOfWfCase.equals("")){
    return listOfWfCase + "Total WorkflowCases deleted: " + countDeletedWorkflows;
  } else {
    return "No matching WorkflowCases found";
  }
} catch (Exception e) {
  log.error("Error cleaning up stale workflow cases: " + e);
}
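The cutoff-date arithmetic at the heart of this rule is plain `java.util.Calendar` work, so it can be sanity-checked outside of IIQ. A minimal sketch (class and method names are my own):

```java
// Standalone sketch of the cleanup rule's cutoff logic: anything created
// before (now - daysGapToDelete) is a deletion candidate.
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;

class CutoffDemo {

    static Date cutoff(int daysGapToDelete) {
        Calendar cal = new GregorianCalendar();
        cal.add(Calendar.DATE, -daysGapToDelete);
        return cal.getTime();
    }

    static boolean isStale(Date created, int daysGapToDelete) {
        return created.before(cutoff(daysGapToDelete));
    }

    public static void main(String[] args) {
        // A case created 60 days ago is stale with a 30-day retention window
        Calendar old = new GregorianCalendar();
        old.add(Calendar.DATE, -60);
        System.out.println(isStale(old.getTime(), 30)); // true
    }
}
```

The same `isStale` check is what the `created` filter expresses as a query, pushing the comparison down to the database instead of doing it in the loop.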

A rule I use to test connection on all applications in an environment from rule runner:

import sailpoint.object.*;
import sailpoint.connector.Connector;
import sailpoint.connector.ConnectorFactory;

try {
  QueryOptions qo = new QueryOptions();
  List applicationList = context.getObjects(Application.class, qo);
  String resultString = "";

  for(Application application : applicationList){
    try {
      Connector connector = ConnectorFactory.getConnector(application, null);
      // testConfiguration throws if the connection test fails
      connector.testConfiguration();
      log.debug("Test connection successful on: " + application.getName());
      resultString = resultString + "[OK] Test connection successful on: " + application.getName() + "\n";
    } catch(Exception e){
      log.debug("Test connection failed on: " + application.getName());
      resultString = resultString + "[FAIL] Test connection failed on: " + application.getName() + " || " + e + "\n";
    }
  }
  return resultString;
} catch(Exception e){
  log.error("Error when retrieving the application list: " + e);
}
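The structure of this rule is a general pattern worth keeping around: run the check for each item inside its own try/catch so one failure doesn't abort the loop, and accumulate an [OK]/[FAIL] report. A standalone sketch with a stand-in checker in place of `Connector.testConfiguration()` (class and parameter names are hypothetical):

```java
// Generic sketch of the per-item try/catch reporting pattern used above.
// The Consumer stands in for a check that throws on failure, the way
// a connector's test-connection call does.
import java.util.List;
import java.util.function.Consumer;

class HealthReport {

    static String report(List<String> names, Consumer<String> check) {
        StringBuilder result = new StringBuilder();
        for (String name : names) {
            try {
                check.accept(name); // throws on failure
                result.append("[OK] Test connection successful on: ").append(name).append("\n");
            } catch (Exception e) {
                // One failure is recorded and the loop keeps going
                result.append("[FAIL] Test connection failed on: ").append(name)
                      .append(" || ").append(e.getMessage()).append("\n");
            }
        }
        return result.toString();
    }

    public static void main(String[] args) {
        String out = report(java.util.Arrays.asList("HR", "AD"), name -> {
            if (name.equals("AD")) throw new RuntimeException("timeout");
        });
        System.out.print(out);
    }
}
```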

My search object contents rule:

/* The premise of this rule is to search the codebase currently live in a SailPoint environment to find code that can't be found in the SailPoint repo or database for whatever reason. Simply change the searchTerm and iteration variables to search for any rules containing the searchTerm. The object searched for can be changed from rule by adjusting the class defined in the context.getObjects call to whatever you are looking for (ex. Workflow.class). */

import sailpoint.object.*;

//Set this to limit the number of objects searched for performance purposes
int iteration = 500;

//Set this to the term you are searching for in the live codebase 
String searchTerm = "plan.toXml";

QueryOptions qo = new QueryOptions();
String searched = "";
String foundInstances = "";

try {
  //Make sure to change the object class if you'd like to search something else like workflows
  List ruleList = context.getObjects(Rule.class, qo);

  log.debug(ruleList.size() + " objects ready to search...");
  for(Rule rule : ruleList){
    if(rule.toXml().contains(searchTerm)){
      //log.debug below is good when running in rule runner
      //log.debug( "Found it in: " + rule.getName());
      foundInstances = foundInstances + "***Found in: " + rule.getName() + "***\n";
      int lineNumber = 0;
      for(String line : rule.toXml().split("\n")){
        lineNumber++;
        if(line.contains(searchTerm)){
          foundInstances = foundInstances + "Line " + lineNumber + ": " + line.trim() + "\n\n";
        }
      }
    } else {
      searched = searched + "Not found in: " + rule.getName() + "\n";
    }
    iteration--;
    if(iteration == 0){
      return foundInstances + searched;
    }
  }

  return foundInstances + searched;
} catch(Exception e){
  log.error("Failed to search for: " + searchTerm + " || Exception: " + e);
}
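The inner search loop is ordinary string work and can be verified outside of IIQ. A minimal sketch of scanning a text blob line by line and recording the 1-based line numbers of every match (class and method names are my own):

```java
// Standalone sketch of the search loop above: split a blob on newlines
// and record the 1-based line numbers where the search term appears.
import java.util.ArrayList;
import java.util.List;

class SourceSearch {

    static List<Integer> findLines(String xml, String searchTerm) {
        List<Integer> hits = new ArrayList<>();
        int lineNumber = 0;
        for (String line : xml.split("\n")) {
            lineNumber++;
            if (line.contains(searchTerm)) {
                hits.add(lineNumber);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        String xml = "<Rule>\n  plan.toXml();\n  log.debug(plan.toXml());\n</Rule>";
        System.out.println(findLines(xml, "plan.toXml")); // [2, 3]
    }
}
```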

A rule I’ve used to create cloned identities and the rule I use to clean them up:

import sailpoint.object.Identity;
import sailpoint.object.Capability;
import java.util.*;

Identity provisionWorkgroup(){
    Identity clone_workgroup = context.getObjectByName(Identity.class, "IDW Cloned Identities");
    Identity spadmin = context.getObjectByName(Identity.class, "spadmin");

    if(clone_workgroup == null){
        log.debug("Workgroup does not exist, beginning provisioning process...");
        clone_workgroup = new Identity();
        clone_workgroup.setName("IDW Cloned Identities");
        clone_workgroup.setDisplayName("IDW Cloned Identities");
        clone_workgroup.setDescription("Workgroup used to easily manage cloned Identities from the IDW Clone Identity function. *This is an auto provisioned workgroup*");
        clone_workgroup.setWorkgroup(true);
        clone_workgroup.setOwner(spadmin);
        log.debug("Committing clone workgroup");
        context.saveObject(clone_workgroup);
        context.commitTransaction();
    } else {
        log.debug("Workgroup already exists...");
    }

    return clone_workgroup;
}

//Use the top to provision 1 identity and the bottom for use with a retrieval script in a multi-threaded rule
// Identity original_identity = context.getObjectByName(Identity.class, "zac_test");
Identity original_identity = object;

Identity workgroup = provisionWorkgroup();
Identity identity = new Identity();
//String name = "dynamic identity2";

String name = original_identity.getName() + "_cloned";
String password = "password1";

log.debug("Creating Identity with the name: " + name);

identity.setName(name.replace(" ", ""));
identity.setDisplayName(name);
identity.setPassword(password); // lab-only; consider context.encrypt(password) outside of a sandbox

// Copy capabilities from the original identity
for(Capability cap : original_identity.getCapabilities()){
    identity.add(cap);
}

// Copy workgroup memberships, then add the clone to the management workgroup
List workgroups = new ArrayList();
for(Identity wg : original_identity.getWorkgroups()){
    workgroups.add(wg);
}
workgroups.add(workgroup);
identity.setWorkgroups(workgroups);

log.debug("Committing Identity");
context.saveObject(identity);
context.commitTransaction();

log.debug("Successfully created Identity!");
import sailpoint.object.Identity;
import sailpoint.api.ObjectUtil;
import sailpoint.api.Terminator;
import java.util.*;

Identity clone_workgroup = context.getObjectByName(Identity.class, "IDW Cloned Identities");
Identity test_identity = context.getObjectByName(Identity.class, "zac_test_cloned");
List<String> deletedIdentitiesList = new ArrayList<String>();
Terminator terminator = new Terminator(context);

if(clone_workgroup != null){
    Iterator members = ObjectUtil.getWorkgroupMembers(context, clone_workgroup, null);
    while(members.hasNext()){
        Object[] object = (Object[]);
        Identity clonedIdentityToDelete = (Identity) object[0];
        if(clonedIdentityToDelete.isProtected()){
            log.debug(clonedIdentityToDelete.getName() + " is a protected identity, removing protection status...");
            clonedIdentityToDelete.setProtected(false);
        }
        log.debug("Deleting cloned Identity: " + clonedIdentityToDelete.getName());
        deletedIdentitiesList.add(clonedIdentityToDelete.getName());
        terminator.deleteObject(clonedIdentityToDelete);
    }
}

return deletedIdentitiesList;

Hi @zac_adams_iid,
Thanks for sharing the rule to clean up stale workflow cases. I just wanted to understand: what is the impact if there are stuck workflow cases and we keep that data for more than 3-4 months?
What performance issues would we face?
Will the Perform Maintenance job timings be impacted if we have stale workflow cases?

Well, usually they don’t cause too much of a performance impact, as they are pushed along by the Perform Maintenance task and aren’t something like a rule just running on a thread in the background. However, this all depends on what your parent workflow is doing! For example, if a workflow case never finishes, a new one might be created in an endless loop. There is also a usability impact: in some cases you’ll see all these old workflow cases on your task results page, making it impossible to find what you’re looking for. Finally, remember that workflow cases are temporary objects and shouldn’t exist in your DB after their purpose has been fulfilled. You’d be doing your DB teams a favor by keeping your tables clean! Hope that answers your question!


Thanks @zac_adams_iid for addressing the query !!


Hi @zac_adams_iid ,
This looks great.
I am trying to get this plugin installed in my lab, however, it fails with “Request Entity Too Large” message when I try uploading the zip file. Any pointers to how to fix this ?

Hey @mike_black! It sounds like your Tomcat upload limits are still set too low. Try taking a look at maxHttpHeaderSize and maxPostSize in $TOMCAT_HOME/conf/server.xml on your server and bump them up.

Hi @mike_black, I had the same issue in my lab environment. My issue was related to the nginx proxy I am using in front of Tomcat.

My solution was to import the plugin from the IIQ console instead.

– Remold

Hi @zac_adams_iid

Amazing plugin, I’m making great use of it!

I’ve had one issue regarding logging, where periodically logs aren’t displayed in the ‘Logs’ section.

If I create a new blank rule with just log.debug(“Test”); it works fine; however, if I then load a rule with multiple log commands, nothing is returned. The rule runs and provides the expected output, just no logs. I even tried adding log.debug(“Test”); on the first line of the opened rule, but to no avail.

Any thoughts on how to resolve this would be greatly appreciated, thanks!

In that case, it sounds like you may be leveraging a rule library with its own declaration of “log”. This will supersede the log object passed in by the rule runner plugin and prevent the plugin from capturing those log outputs.

Thanks Zac, that was exactly it, I was referencing a rule library that was declaring log.

Is there a way to override that at all? Possibly by declaring log in the source rule so that it takes precedence over the rule library’s declaration?