Cannot download reports because they're too large

Which IIQ version are you inquiring about?

8.4p1

We have a few reports that are very large and we are unable to export the data into a CSV or PDF. Whenever we try, we get a 500 error. Is there a way we can get this information into a CSV or do something different to prevent the timeout?

If you are very desperate, you can extract the CSV and/or PDF files from the database. The files are stored in chunks in FileBuckets. The TaskResult references a JasperResult object, which references a PersistedFile. FileBuckets have a parent PersistedFile.

To give you an idea of how it’s all stored, here’s a query (Oracle SQL) that lists the FileBuckets associated with TaskResults, along with the length of the data in each bucket.

select 
    tr.name
    , jr.id as jr_id
    , jf.idx as jasper_file_idx
    , pf.name
    , pf.content_type
    , pf.content_length
    , fb.id
    , fb.file_index
    , length(fb.data)
from spt_task_result tr
join spt_jasper_result jr on jr.id = tr.report
join spt_jasper_files jf on jf.result = jr.id
join spt_persisted_file pf on pf.id = jf.elt
join spt_file_bucket fb on fb.parent_id = pf.id
where tr.report is not null
order by tr.created, jf.idx, pf.name, pf.id, fb.file_index;
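Once you have the bucket data, reassembling the original file is just a matter of concatenating the fb.data blobs in fb.file_index order. Here’s a minimal sketch in plain Java (no IIQ API; the two-chunk CSV in main is hypothetical, standing in for bytes you’d pull out of spt_file_bucket):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class BucketAssembler {

    // Concatenate FileBucket chunks (already sorted by fb.file_index)
    // back into the original file's bytes.
    public static byte[] reassemble(List<byte[]> chunksInFileIndexOrder) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte[] chunk : chunksInFileIndexOrder) {
            out.write(chunk);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical CSV split across two FileBuckets
        byte[] whole = reassemble(Arrays.asList(
                "id,name\n".getBytes(),
                "1,Alice\n".getBytes()));
        System.out.print(new String(whole));
    }
}
```

The ordering matters: if the chunks arrive out of file_index order, the reassembled CSV or PDF will be scrambled.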

If you want to stick to BeanShell, here’s some sample code that will get some arbitrary FileBucket’s data.

 { // Global variables are really bad in BeanShell because they stick around
   // after your Rule has finished running! Wrap everything in a block so that
   // nothing is global, and the problem is "solved".

    import sailpoint.object.*;
    import sailpoint.tools.Util;
    
    //String myReportResultName = "something";
    List results = new LinkedList();

    QueryOptions qo = new QueryOptions();

    // Search by TaskResult name
    //qo.addFilter(Filter.eq("name", myReportResultName));

    // Join the FileBuckets
    qo.addFilter(Filter.join("report.files.id", "FileBucket.parent.id"));

    // Just grab the first row found for demonstration purposes
    qo.setResultLimit(1);
    Iterator it = null;
    try {
      it = context.search(TaskResult.class, qo, "FileBucket.data" /* projection query to get just the data */);
      while (it.hasNext()) {
        results.add(it.next()[0]);
      }
    } finally {
      if (null != it) {
        Util.flushIterator(it);
      }
    }

    return results;
}

If you run this from the debug page, the result would look like this:

<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE List PUBLIC "sailpoint.dtd" "sailpoint.dtd">
<List>
  <ByteArray><![CDATA[base64-encoded bytes here]]></ByteArray>
</List>
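The CDATA contents are the raw bucket bytes, Base64-encoded. To recover the actual file content, decode them. A minimal sketch in plain Java (the sample string below is a hypothetical two-line CSV, not real report output):

```java
import java.util.Base64;

public class DecodeBucket {
    public static void main(String[] args) {
        // Hypothetical Base64 chunk copied out of the <ByteArray> CDATA
        String chunk = "aWQsbmFtZQoxLEFsaWNlCg==";
        byte[] bytes = Base64.getDecoder().decode(chunk);
        // For a CSV this is plain text; for a PDF, write the bytes
        // to a file instead of printing them
        System.out.print(new String(bytes));
    }
}
```

If the file spans multiple buckets, decode each chunk and concatenate them in fileIndex order before interpreting the result.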

You could rewrite this example to narrow the search down to just the reports you want, and to order the results by report.files.id and then FileBucket.fileIndex. Then change the context.search projection to retrieve report.files.id, report.files.name, and FileBucket.data. That way, you can stream each file’s chunks to disk.

Something like… (this reuses the QueryOptions setup from the previous example, minus the setResultLimit(1) call):

  qo.addOrdering("report.files.id", true);
  qo.addOrdering("FileBucket.fileIndex", true);

  String currentFileId = null;
  OutputStream currentOutputStream = null;
  Iterator it = null;
  try {
    it = context.search(TaskResult.class, qo,
                        "report.files.id, report.files.name, FileBucket.data");
    while (it.hasNext()) {
      Object[] row = it.next();
      if (!row[0].equals(currentFileId)) {
        // This chunk is for a different file! Since we ordered
        // the results by file, we need to close the
        // current file (if any) and open a new one.
        if (null != currentOutputStream) {
          currentOutputStream.close();
        }

        // This might be a bad idea. report.files.name *should* be safe
        // to use as a file name, but is it *really* a good idea to
        // rely on that?
        currentOutputStream = new FileOutputStream(row[1]);
        currentFileId = row[0];
      }
      // write the chunk to the file
      currentOutputStream.write(row[2]);
    }
  } finally {
    try {
      if (null != currentOutputStream) {
        currentOutputStream.close();
      }
    } finally {
      if (null != it) {
        Util.flushIterator(it);
      }
    }
  }