This usually happens when the tablespace still shows free space internally, but the database datafile has already grown to fill the disk. So even if the tables show ~67% usage, the physical file can be at 100% of the disk because of auto-extend settings.
I would start with a few quick checks:
Verify if the datafile auto‑extend caused the file to grow until the disk is full.
Check large IIQ tables like spt_audit_event, spt_task_result, or spt_syslog for sudden growth.
Consider purging old audit/task data and increasing disk space or limiting datafile growth.
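To confirm the datafile-vs-disk situation from the OS side, here is a minimal Python sketch. The path and function name are mine, not anything from IIQ; point it at your actual datafile:

```python
import shutil
from pathlib import Path

def datafile_vs_disk(datafile_path: str) -> dict:
    """Compare a datafile's physical size with usage of the disk it lives on.

    A tablespace can report free space internally while the datafile itself
    has already auto-extended to consume the whole volume.
    """
    path = Path(datafile_path)
    usage = shutil.disk_usage(path.parent)
    return {
        "datafile_bytes": path.stat().st_size,
        "disk_total_bytes": usage.total,
        "disk_free_bytes": usage.free,
        "disk_pct_used": round(100 * usage.used / usage.total, 1),
    }
```

If `disk_pct_used` is near 100 while the tablespace reports plenty of free space, you are looking at exactly the auto-extend scenario described above.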
→ Check the current log level settings and ensure that all loggers are set to INFO level instead of DEBUG or TRACE. Excessive logging can significantly increase database and disk usage.
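For reference, in log4j2.properties that would look something like the fragment below. The logger name here is only an example; check your own file for whatever is currently set to debug or trace:

```properties
# Illustrative log4j2.properties fragment (logger name is an example only).
# Keep the root logger at warn/info; avoid leaving DEBUG or TRACE enabled
# in production, since that traffic ends up in logs and in spt_syslog.
rootLogger.level = warn

logger.apiAuth.name = sailpoint.api.Authenticator
logger.apiAuth.level = info
```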
→ Review workflows and verify whether tracing is enabled. If any workflows have trace = true, consider disabling it unless required for debugging, as it can generate a large volume of data.
→ Run the Perform Identity Request Maintenance task.
If the retention setting is currently 0 (which means data is stored indefinitely), update it to a defined value such as 90 or 180 days, based on your organization’s data retention policy.
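Just to make the retention semantics concrete, here is a tiny illustration of what a zero versus non-zero setting means for pruning eligibility. This is a hypothetical helper, not an IIQ API:

```python
from datetime import datetime, timedelta
from typing import Optional

def retention_cutoff(retention_days: int,
                     now: Optional[datetime] = None) -> Optional[datetime]:
    """Return the timestamp before which records become eligible for pruning.

    A retention setting of 0 mirrors "keep indefinitely": nothing is ever
    eligible, so the function returns None.
    """
    if retention_days <= 0:
        return None  # 0 means indefinite retention, so nothing is pruned
    now = now or datetime.now()
    return now - timedelta(days=retention_days)
```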
→ Clean up the audit table by deleting records older than 150–180 days. This table is typically one of the largest contributors.
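A minimal sketch of the kind of batched delete I mean, using sqlite3 as a stand-in for your real database (assuming `created` is epoch milliseconds, which is how IIQ typically stores it; adapt the SQL to your DB and schema, and test on a copy first):

```python
import sqlite3
from datetime import datetime, timedelta

def prune_audit_events(conn: sqlite3.Connection,
                       older_than_days: int,
                       batch_size: int = 10_000) -> int:
    """Delete spt_audit_event rows older than the cutoff, in small batches.

    Batching keeps each transaction short, so a very large table does not
    hold long locks or bloat the transaction log during cleanup.
    """
    cutoff_ms = int((datetime.now()
                     - timedelta(days=older_than_days)).timestamp() * 1000)
    deleted = 0
    while True:
        cur = conn.execute(
            "DELETE FROM spt_audit_event WHERE id IN "
            "(SELECT id FROM spt_audit_event WHERE created < ? LIMIT ?)",
            (cutoff_ms, batch_size),
        )
        conn.commit()
        if cur.rowcount == 0:
            break  # no rows left under the cutoff
        deleted += cur.rowcount
    return deleted
```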
There are some tables in IIQ whose size grows very quickly, such as spt_audit_event, spt_syslog, spt_identity_request, spt_identity_request_item, and spt_task_result. You can definitely fine-tune settings or build a few DB scripts to help with data pruning or archival. This depends on your organization's compliance policies, so review them and apply changes accordingly. For example, in our case we are not supposed to delete any access request or audit records, so for audits we started archiving older data (> 1 year) in compressed format, which we can restore anytime we want.
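To illustrate the archive-then-delete approach, here is a sketch with sqlite3 standing in for the real database and a deliberately reduced column set. A real export should include every column and likely run in batches; treat the table and column names as illustrative:

```python
import csv
import gzip
import sqlite3
from datetime import datetime, timedelta

def archive_old_audits(conn: sqlite3.Connection,
                       archive_path: str,
                       older_than_days: int = 365) -> int:
    """Export audit rows older than the cutoff to a gzipped CSV, then delete.

    The compressed file can be kept cheaply and re-imported later if
    auditors ever need the data back. Column set is reduced for the sketch.
    """
    cutoff_ms = int((datetime.now()
                     - timedelta(days=older_than_days)).timestamp() * 1000)
    rows = conn.execute(
        "SELECT id, created, action FROM spt_audit_event WHERE created < ?",
        (cutoff_ms,),
    ).fetchall()
    with gzip.open(archive_path, "wt", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "created", "action"])
        writer.writerows(rows)
    # Only delete after the archive file is fully written
    conn.execute("DELETE FROM spt_audit_event WHERE created < ?", (cutoff_ms,))
    conn.commit()
    return len(rows)
```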
Check your ProvisioningTransaction pruning settings. You can also ship these records to an external solution such as Kibana for storage and metrics.
In IIQ, TempDB is heavily utilized for sorting, subqueries, and internal join operations. So if you have long-running queries (from tasks or the rule runner), TempDB usage will grow as well.
Note: Found a fix? Help the community by marking the comment as a solution. Feel free to react with an emoji to show your appreciation, or message me directly if your problem requires a deeper dive.
@rishavghoshacc I would recommend checking which tables hold the most data. Also, in your log4j2.properties file, please check the syslog level; this could be inflating the database.
Hi @rishavghoshacc, doing daily counts of the number of rows in the major tables and charting them over time can give useful insight into database space trends.
If you post the row counts I’ll see if there are any that look excessive.
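A tiny sketch of the kind of trend check that daily counts enable; the threshold and data structure here are just an example, not anything built into IIQ:

```python
def flag_growth(history: dict, threshold_pct: float = 20.0) -> list:
    """Flag tables whose latest day-over-day row growth exceeds threshold_pct.

    history maps table name -> list of daily row counts, oldest first.
    """
    flagged = []
    for table, counts in history.items():
        if len(counts) >= 2 and counts[-2] > 0:
            growth = 100 * (counts[-1] - counts[-2]) / counts[-2]
            if growth > threshold_pct:
                flagged.append(table)
    return flagged
```

Feed it a few days of counts and the outliers (usually spt_audit_event or spt_syslog) stand out immediately.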