JDBC Aggregation Failing

We are constantly seeing this error when aggregating a JDBC application. Does anyone have any idea regarding this? Is there a way to increase the timeout on the JDBC connector?

An error occurred while attempting to create task partitions for Application: IO Error: Socket read timed out

Hi @ankeetarjyal,

Increasing the timeout could resolve the problem, but if you are hitting this you may have something (firewall, load balancer) between SailPoint and the target system; check the network performance too.

Try to add this to connector:

<entry key="ConnectionTimeout" value="30000"/>

It could also depend on the DB configuration.


How long after the aggregation task starts before you get the error?

Hello @enistri_devo,
Is this value in seconds or milliseconds?

Around 30-40 mins after.

Hi @ankeetarjyal,

Below are some of the reasons you can cross-verify with the JDBC application team. The timeout is in milliseconds, so the suggested 30000 is 30 seconds.

  1. Network problems: disruptions or latency in the network can lead to timeouts, especially if the database is not reachable or is slow to respond.
  2. Long-running queries: if a query takes too long to execute, it may exceed the configured timeout, resulting in a socket read timeout.
  3. Idle connection timeout: connections that remain idle for too long may be closed by the database or network devices, causing subsequent requests to fail.
  4. Load at the database end: high load on the database server can slow down response times, leading to timeouts during peak usage.

The value is in milliseconds.

Same issue even after adding the connection timeout.

You say this is happening when attempting to “create task partitions”. Some basic questions for you:

  • First, does your test connection SQL work? Normally I choose a query that counts records in the table; it just has to not fail. Alternately, you can query for a user that will always exist in the table.
  • Second, does your getObject query work? It should fetch a single user using something like where username='$(identity)', where that identity is the nativeIdentity field for the connector.
  • Third, does your non-partitioned query work? It should query all users.
  • Finally, how have you constructed your partitioned queries from your non-partitioned query? The connector does not construct the queries for you; you have to do that yourself. (See the sketch after this list.)
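
To make the first three concrete, here is a minimal SQL sketch of those queries; the users table and its username, employeeid, and email columns are hypothetical placeholders for your own schema:

-- Test connection SQL: only needs to succeed, e.g. a simple count
SELECT COUNT(*) FROM users;

-- getObject SQL: returns one account; $(identity) is replaced with the account's nativeIdentity
SELECT username, employeeid, email FROM users WHERE username = '$(identity)';

-- Non-partitioned aggregation SQL: returns every account
SELECT username, employeeid, email FROM users;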

I examine the data in the native identity and use that to construct the partitioned queries. It's a good idea to get a pattern of the native identity values so you can determine how to divide them up. If they are numeric, you can construct where clauses that use a modulo, such as employeeid % 100 = 0 through employeeid % 100 = 99 for 100 partitions.
If they are alphabetic, you can use either a starts-with or an ends-with clause (using like), such as where username like '%a' through '%z'.
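
As an illustration of both patterns, partition queries might look like the sketch below, again assuming the hypothetical users table and columns from above; the MOD() syntax is Oracle/MySQL style, so adjust to your database:

-- Numeric native identity: 100 partitions, one per remainder of employeeid
SELECT username, employeeid, email FROM users WHERE MOD(employeeid, 100) = 0;
SELECT username, employeeid, email FROM users WHERE MOD(employeeid, 100) = 1;
-- ... one partition query per remainder, up to MOD(employeeid, 100) = 99

-- Alphabetic native identity: 26 ends-with partitions on username
SELECT username, employeeid, email FROM users WHERE username LIKE '%a';
SELECT username, employeeid, email FROM users WHERE username LIKE '%b';
-- ... continue through LIKE '%z'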

The key to a good partitioning scheme is to know your data so that the partitions are somewhat close to having the same number of results. Hope this helps.
