If your org supports it, you can use crontab:
just search your favorite search engine for something like "linux crontab format";
I bet the first result returned will do it
('crontab' in Linux with Examples - GeeksforGeeks)
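For reference, here's a minimal crontab sketch; the script path, log path, and schedule are all hypothetical placeholders:

```
# min  hour  day-of-month  month  day-of-week  command
# Run a (hypothetical) aggregation script every day at 02:00
0 2 * * * /opt/idn/aggregate.sh >> /var/log/idn-aggregate.log 2>&1
```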
If your org does not support cron, you may have to follow your org's enterprise scheduling process.
As a last resort, host a simple web service app (maybe Node.js) that kicks off the aggregation API, and call this service from an IDN workflow on a scheduled basis; a sketch follows below.
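Here's a minimal sketch of such a trigger service, assuming Node.js 18+ (for the global `fetch`/`FormData`/`Blob`). `TENANT`, `SOURCE_ID`, `TOKEN`, and `CSV_PATH` are hypothetical environment variables, and the beta `load-accounts` path is the endpoint I'd expect for this; verify both it and the auth scheme against your tenant's API docs before relying on this:

```js
// Minimal "last resort" aggregation trigger, assuming Node.js 18+.
// All config values and the endpoint path are assumptions to verify.
const http = require("http");
const fs = require("fs");

const { TENANT, SOURCE_ID, TOKEN, CSV_PATH } = process.env;

http.createServer(async (req, res) => {
  if (req.method !== "POST" || req.url !== "/aggregate") {
    res.writeHead(404).end();
    return;
  }
  try {
    // For a flat file source, the CSV is uploaded with the aggregation call.
    const form = new FormData();
    form.append(
      "file",
      new Blob([fs.readFileSync(CSV_PATH)], { type: "text/csv" }),
      "accounts.csv"
    );
    const resp = await fetch(
      `https://${TENANT}.api.identitynow.com/beta/sources/${SOURCE_ID}/load-accounts`,
      { method: "POST", headers: { Authorization: `Bearer ${TOKEN}` }, body: form }
    );
    res.writeHead(resp.ok ? 202 : 502, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ triggered: resp.ok, status: resp.status }));
  } catch (err) {
    res.writeHead(500).end(String(err));
  }
}).listen(8080, () => console.log("Aggregation trigger listening on :8080"));
```

An IDN workflow (or the cron entry above) can then POST to `/aggregate` on whatever schedule you need.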
Flat file sources are static, hence your need for an external trigger to schedule your aggregations. That said, instead of trying to configure a cron job, I wonder why you don't try a different connector that still supports CSV imports but allows you to schedule the aggregations from within SailPoint.
For example, if you're able to host your CSV in the cloud and/or on a server with an IP reachable from your VA's network, you could configure the SQLLoader connector to read the file as a "direct connection", which allows you to configure regular schedules.
The con of this approach is that every row you remove from the CSV deletes the corresponding account from your source in SailPoint, so your file must always contain the information for all your accounts, even those already imported. That's fine for small sources or sources that don't change often, but when a source grows too fast this approach can quickly become too expensive and you won't be able to aggregate very often.
Another, more efficient alternative is to build an API gateway in front of your file repository and retrieve the information via the Web Services connector, which supports delta aggregations (i.e. you only need to expose new and updated records instead of EVERYTHING); a sketch of such an endpoint is below.
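As a rough sketch of what the gateway could expose, here's a Node.js endpoint that serves only records modified after a given timestamp. Everything here (the `/accounts` path, the `since` parameter, the sample records) is illustrative; in practice the rows would come from your file repository:

```js
// Rough sketch of a delta-friendly endpoint; names and data are illustrative.
const http = require("http");

const records = [
  { id: "1001", userName: "jdoe",   modified: "2024-05-01T10:00:00Z" },
  { id: "1002", userName: "asmith", modified: "2024-05-03T09:30:00Z" },
];

http.createServer((req, res) => {
  const url = new URL(req.url, `http://${req.headers.host}`);
  if (req.method !== "GET" || url.pathname !== "/accounts") {
    res.writeHead(404).end();
    return;
  }
  // ?since=<ISO-8601 UTC timestamp>: return only rows changed after that
  // instant. Lexicographic comparison is safe for same-format UTC strings.
  const since = url.searchParams.get("since");
  const body = since ? records.filter((r) => r.modified > since) : records;
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify(body));
}).listen(8080);
```

The connector can then pass the last aggregation timestamp as `since` on each delta run, so only new and updated rows travel over the wire.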
In either case, you may want to configure some protection layers for your data, such as encryption at rest and in transit. The traffic should go through your VAs, and those should sit behind firewalls in your private network; otherwise your data would be exposed to breaches.