JDBC Connector: Intermittent error with Oracle DB: java.sql.SQLRecoverableException: IO Error: Connection reset by peer

We have a JDBC Direct Connector set up and configured with a provisioning rule. This is working for the most part, except that we get an intermittent error when provisioning (create, modify, or term). The error message is as follows:

["java.sql.SQLRecoverableException: IO Error: Connection reset by peer","java.sql.SQLRecoverableException: IO Error: Connection reset by peer"]

The system appears to retry the provisioning and succeed after a while, so that is a plus.

When looking into this error, most of the results say it is related to the system running out of entropy and suggest changing the random number generator that Java uses. Running out of entropy seems like a stretch for the VA, and no one else on the dev forum appears to have run into this issue before (I searched for the error and got 0 results).
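For anyone who wants to rule the entropy theory out on their own VA, a quick generic JVM check like the sketch below (nothing ISC-specific, just standard Java APIs) shows which SecureRandom source is configured and whether generating strong random bytes actually blocks, which is what entropy starvation looks like in practice:

```java
import java.security.SecureRandom;
import java.security.Security;

// Generic diagnostic sketch: prints the JVM's configured randomness source and
// times a "strong" SecureRandom call, which will stall if entropy is scarce.
public class EntropyCheck {
    public static void main(String[] args) throws Exception {
        // Only set if the JVM was started with -Djava.security.egd=...
        System.out.println("java.security.egd    = " + System.getProperty("java.security.egd"));
        // Default source from the java.security config file.
        System.out.println("securerandom.source  = " + Security.getProperty("securerandom.source"));

        long start = System.nanoTime();
        SecureRandom.getInstanceStrong().nextBytes(new byte[16]);
        System.out.printf("Strong SecureRandom took %d ms%n",
                (System.nanoTime() - start) / 1_000_000);
    }
}
```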

The other possibility mentioned was that a firewall could be closing the connection instead. I have not looked into this option yet, as that team is not available, but it does have some credence.

Has anyone else encountered this error and resolved it without posting to the forum? If so, what was the solution in your case?

DB or not DB, that is the question.
(Sorry for the introduction, I couldn't resist :sweat_smile:)

Hi @gmilunich,
More than the firewall, I think it is the database itself that is closing the connection.

If you have the ability to work with the DBA, you can ask them to increase the timeout or the number of connections allowed for your source's service account.

I had a similar problem on IIQ, and until the DBA increased the number of parallel connections to 2 and raised the timeout, I kept having problems.
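If you can log in with the same service account the source uses, you can check which limits currently apply to it before going to the DBA. A rough sketch (host, service name, and credentials are placeholders, and the Oracle JDBC driver must be on the classpath):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch only: connection details are placeholders for your environment.
public class CheckServiceAccountLimits {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:oracle:thin:@//db-host:1521/SERVICE_NAME";
        try (Connection con = DriverManager.getConnection(url, "SVC_ACCOUNT", "secret");
             Statement st = con.createStatement();
             // USER_RESOURCE_LIMITS shows the profile limits that apply to the
             // account you are currently logged in as.
             ResultSet rs = st.executeQuery(
                     "SELECT * FROM user_resource_limits "
                   + "WHERE resource_name IN ('IDLE_TIME', 'SESSIONS_PER_USER')")) {
            while (rs.next()) {
                System.out.println(rs.getString("RESOURCE_NAME")
                        + " = " + rs.getString("LIMIT"));
            }
        }
    }
}
```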

I hope I was of help to you


Thanks for the suggestions. Do you know which fields and values the DBA had to increase on the DB? And for clarification, was it an Oracle DB?

Yes, it was Oracle 18c or 19c, I don't remember which.

You can find everything on this page:

This is for the latest version (24c), but you can choose an older one too.

The guide says to modify the glog.sql.update.timeout and glog.sql.query.timeout properties.

Everything depends on the version and the type of product. Oracle is a whole other world.


Ok, thanks for that information. I will see about discussing it with the DBA and see if it can be updated.

If anyone else has suggestions, I’d love to hear them in case this does not resolve the issue.


While waiting for the DB app owners to work with the DBA (they were seeing the same or a similar error with another connection of theirs), I looked to see if there was something we could do on our side.

I found some options for JDBC Pool parameters that could be set here: Parameters for JDBC Pooling

The examples are for IIQ (even though this is the ISC connector doc), but with some changes I was able to add them to the connectorAttributes key using the Update Source - Partial API.

Additional details on the configs, though with an IIQ focus.

My first test was to disable pooling using the "pool.disablePooling" key. I also set the "pool.minEvictIdle" key to "300000" (5 minutes), since the default seemed to be 10 minutes, not 10 ms as the connector documentation would suggest. I surmised that, since the peer/DB was closing the connection before ISC was, shortening that minimum time would let ISC clean up the idle connections sooner.
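For anyone who wants to try the same thing, the call I made looked roughly like the sketch below. The tenant URL, SOURCE_ID, and token are placeholders for your own environment, the values shown are from this first test, and you should double-check the exact endpoint and content type against the Update Source - Partial API docs:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Rough sketch of the Update Source - Partial call (PATCH with a JSON Patch body).
// TENANT, SOURCE_ID, and the ISC_TOKEN environment variable are placeholders.
public class PatchJdbcPooling {
    public static void main(String[] args) throws Exception {
        String body = """
            [
              { "op": "add", "path": "/connectorAttributes/pool.disablePooling", "value": "true" },
              { "op": "add", "path": "/connectorAttributes/pool.minEvictIdle", "value": "300000" }
            ]
            """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://TENANT.api.identitynow.com/v3/sources/SOURCE_ID"))
                .header("Authorization", "Bearer " + System.getenv("ISC_TOKEN"))
                .header("Content-Type", "application/json-patch+json")
                .method("PATCH", HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + "\n" + response.body());
    }
}
```

For the second test described below, I just flipped pool.disablePooling back to "false" and added the max-idle key from the same doc (set to 5); I'm going from memory on that key's exact spelling, so check the connector documentation.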

When testing with pooling disabled, we ran 11 creates and did not get the error once. While that is a small sample size, we were previously getting the error on more than 50% of our creates, so it seemed to have worked to some degree.

We then re-enabled pooling, but kept the shorter minEvictIdle time and reduced the max idle threads from 10 to 5. We tested again with 6 creates, and again there were no errors.

So it seems like we may be able to use these parameters to reduce/resolve this issue while we wait for the DB App Owner to work with the DBA team on their end.

Holding off on marking a solution until that testing happens, but @enistri_devo's suggestion was helpful in getting closer to one.


Hi @gmilunich,

I’m happy to have been helpful to you, even if only a little :sweat_smile:

So, if I understand correctly, the problem is that open connections persist for too long?

Yeah, it seems like the connection persistence in SailPoint ISC is longer than that of the DB, so the DB closes the connection first while, according to ISC, it is still open in the pool. When ISC then grabs that idle connection from the pool to use it, the connection is no longer active, and the exception is thrown because the DB already closed it.

I have not done extensive testing to prove this 100%, so be aware of that, but that is what logically makes sense to me in this situation from what I am seeing.
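To put the theory in plain JDBC terms (this is not the connector's actual code, just a sketch of the failure mode): if a pooled java.sql.Connection has already been dropped on the DB side, the next statement on it is what surfaces as the SQLRecoverableException, and validating with Connection.isValid() before reuse is essentially what pool eviction/validation settings protect against.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Illustration only: the pool hands back a connection the DB already closed,
// and the first use of it is where "IO Error: Connection reset by peer" shows up.
public class StaleConnectionExample {

    // Placeholder for whatever pool the connector uses internally.
    static Connection borrowFromPool() {
        throw new UnsupportedOperationException("illustration only");
    }

    static void runProvisioningSql(String sql) throws SQLException {
        Connection con = borrowFromPool();

        // isValid() pings the DB; it returns false if the peer already closed
        // the socket, instead of letting the next statement fail with
        // SQLRecoverableException.
        if (!con.isValid(5)) {       // timeout in seconds
            con.close();             // discard the dead connection
            con = borrowFromPool();  // borrow a fresh one
        }

        try (Statement st = con.createStatement()) {
            st.execute(sql);         // without the check above, this is where
                                     // the intermittent error would be thrown
        }
    }
}
```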

