Hi all,
I’m looking to improve how we do testing and regression coverage for IdentityIQ and wanted to hear what people are actually automating in real projects.
Environment: IdentityIQ 8.4 on Tomcat with JDK 11. Today we rely mostly on manual UAT plus a few ad-hoc scripts. I’ve seen mentions of the IIQ Testing Framework, rule unit testing, and some Selenium/API-based approaches, but I’d like to understand what’s really working for you.
Specifically:
- Scope of automation
What do you currently cover with automated tests? For example:
• Application/connectors (aggregation, delta, provisioning, password reset)
• LCM / lifecycle (joiner, mover, leaver, access requests, approvals)
• Certifications and revocations (incl. closed-loop remediation)
• Policies / SoD violations and remediation
• Custom rules, workflows, plugins
• REST APIs and/or key UI flows
- Tools and frameworks
What are you using in practice (ITF, FitNesse, Selenium/Playwright/Cypress, JUnit with iiq.jar, JMeter, Katalon, etc.)?
Do you integrate any of this into CI/CD (Jenkins, Azure DevOps, GitLab, Ansible, …)? To make this concrete, I’ve put a sketch of the kind of REST-level test I have in mind after this list.
- Test data and environments
How do you manage test data for repeatable runs (identities, accounts, roles, SoD scenarios)?
Any patterns that help keep tests stable rather than brittle? (There’s also a small test-data seeding sketch below.)
- OOTB vs custom
Where do you lean on OOTB tasks/reports/REST, and where did you find you really needed a custom test harness?
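For reference, here is the kind of REST-level check I mentioned under tools and frameworks: a minimal JUnit 5 sketch against the OOTB SCIM 2.0 endpoint using the JDK 11 HttpClient. The host, credentials, and assertions are placeholders for a disposable test environment, not working code from our project.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

/**
 * Smoke test against IdentityIQ's OOTB SCIM 2.0 REST endpoint.
 * BASE_URL and the credentials are placeholders for a test environment.
 */
class ScimUsersSmokeTest {

    private static final String BASE_URL = "http://iiq-test:8080/identityiq"; // hypothetical test host
    private static final String BASIC_AUTH = "Basic " + Base64.getEncoder()
            .encodeToString("spadmin:admin".getBytes(StandardCharsets.UTF_8)); // test credentials only

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    void usersEndpointRespondsWithScimListResponse() throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/scim/v2/Users?count=1"))
                .header("Authorization", BASIC_AUTH)
                .header("Accept", "application/scim+json")
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // A healthy deployment should return 200 and a SCIM ListResponse envelope.
        assertEquals(200, response.statusCode());
        assertTrue(response.body().contains("urn:ietf:params:scim:api:messages:2.0:ListResponse"));
    }
}
```

Even a handful of checks like this, run on every deployment, would catch the “war deployed but REST layer broken” class of regressions early. What I don’t know is how far people push this style before it makes more sense to switch to ITF or a UI-driven tool.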
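On the test-data side, the direction I’m leaning toward is regenerating the feed files for our delimited-file test applications from code before each run, so aggregations always start from the same known accounts. A rough sketch follows; the path and column layout are made up for illustration and would need to match the test application’s schema.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

/**
 * Regenerates a deterministic CSV feed for a delimited-file test application,
 * so aggregation-based tests always start from the same known accounts.
 */
public class SeedHrFeed {

    public static void main(String[] args) throws IOException {
        Path feed = Path.of("/opt/iiq-test/feeds/hr_feed.csv"); // hypothetical feed location

        List<String> lines = new ArrayList<>();
        lines.add("employeeId,firstName,lastName,department,status");
        // Fixed, versioned test identities: one per lifecycle scenario we regress.
        lines.add("T0001,Joiner,Test,Finance,active");     // joiner scenario
        lines.add("T0002,Mover,Test,Engineering,active");  // mover scenario (department changes between runs)
        lines.add("T0003,Leaver,Test,Finance,terminated"); // leaver scenario
        lines.add("T0004,Sod,Test,Finance,active");        // SoD violation scenario

        Files.createDirectories(feed.getParent());
        Files.write(feed, lines);
        System.out.println("Wrote " + (lines.size() - 1) + " test accounts to " + feed);
    }
}
```

The idea is that every scenario we regress (joiner, mover, leaver, SoD) has a fixed, versioned identity in the feed, which seems less brittle than depending on whatever data happens to be in the environment. Curious whether others do something similar or seed directly through the API instead.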
I’m trying to build a practical, maintainable test catalog for IdentityIQ that we can run regularly, not just before major releases. Any concrete examples, lessons learned, or anti-patterns would be very helpful.
Thank you so much in advance!
Regards,
Mustafa