How much optimisation should one put into slow integration test setups?

I have some very slow integration tests that use Selenium and require a lot of database setup. Setup and teardown take on the order of tens of seconds per test, while the test bodies themselves only take a few seconds each. Since there are hundreds of these tests, this adds up to a lot.
My previous place had an elaborate resource pooling/reuse system that identified which browser windows/database tables were in a "good" state and reused them instead of re-creating them. This cut setup/teardown time to nearly zero per test (when no tests were failing)!
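The browser half of that idea looks roughly like the following (a minimal sketch assuming JUnit-style tests and the Selenium Java bindings; the class and method names are illustrative, not the actual code):

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Illustrative sketch only: keep one browser per JVM and reset it between
// tests instead of quitting and relaunching it for every test.
public final class SharedBrowser {

    private static WebDriver driver;                  // reused across tests

    static {
        // Make sure the reused browser is closed when the test JVM exits.
        Runtime.getRuntime().addShutdownHook(new Thread(SharedBrowser::discard));
    }

    public static synchronized WebDriver get() {
        if (driver == null) {
            driver = new ChromeDriver();              // expensive: pay it once
        }
        return driver;
    }

    // Cheap "good enough" reset between tests: clear session state rather
    // than recreating the browser.
    public static synchronized void reset() {
        if (driver != null) {
            driver.manage().deleteAllCookies();
            driver.get("about:blank");
        }
    }

    // Fallback when a test leaves the browser in a bad state: throw it away
    // so the next test pays the full startup cost again.
    public static synchronized void discard() {
        if (driver != null) {
            driver.quit();
            driver = null;
        }
    }
}
```

Each test grabs the shared driver in its setup and calls reset() in its teardown; only a test that finds the browser in a bad state falls back to discard() and a full restart.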
I have spent a few days writing a simplified version of the Selenium browser reuse code (along the lines of the sketch above), and it has produced very good results. However, the database maintenance code would be much harder (e.g. my old place used Hibernate to recreate the schema, which I cannot use here). I estimate it would take at least a week of work, but since the time saved is only in CI, I am finding it difficult to get my manager on board. I believe the faster turnaround would help day-to-day test development as well, but no one seems to care.
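For a sense of what that week would go towards, the database side would amount to something like this (a rough JDBC sketch, assuming each test can declare which tables it dirties; the per-test bookkeeping and the baseline data scripts are where most of the effort would go):

```java
import java.sql.Connection;
import java.sql.Statement;
import java.util.Set;

// Illustrative sketch only: instead of rebuilding the whole schema for every
// test, reset just the tables a test is known to dirty, then re-insert the
// known-good baseline rows for those tables.
public final class TableResetter {

    private final Connection connection;   // connection to the test database

    public TableResetter(Connection connection) {
        this.connection = connection;
    }

    // 'dirtyTables' would come from per-test metadata (an annotation, or a
    // registry the fixtures write to) -- that bookkeeping, plus maintaining
    // the baseline scripts, is where most of the estimated week would go.
    public void reset(Set<String> dirtyTables) throws Exception {
        try (Statement statement = connection.createStatement()) {
            for (String table : dirtyTables) {
                statement.executeUpdate("DELETE FROM " + table);
                // Hypothetical helper: replay a baseline SQL script for this
                // table to restore its starting rows.
                runBaselineScript(statement, table);
            }
        }
        if (!connection.getAutoCommit()) {
            connection.commit();
        }
    }

    private void runBaselineScript(Statement statement, String table) throws Exception {
        // Left unimplemented in this sketch.
    }
}
```

The real version would also have to cope with foreign keys and with detecting tables left in a bad state, which is why a simple delete-and-reseed loop is only a starting point.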
I am wondering how much time people normally spend on optimising slow integration tests, especially in small teams (3-4 people). And how do people justify an involved optimisation like the one above?
Edit:
A bit more detail about the costs: the integration tests run on shared Jenkins boxes overnight. They take 1-2 hours, depending on how many tests fail and how busy the shared database box is.
We use them mainly for regression testing. A passing run is a requirement for release sign-off; however, management allows minor fixes/hotfixes to be released with only the faster unit and non-Selenium integration tests.
I know of two cases in the past year where bugs that would have been caught by the Selenium tests made it into a release. I was hoping to eliminate this by making the tests finish within a release timeframe.
Also, a bit of development time could be saved, as each manual run of a particular test would be faster. But a back-of-the-envelope calculation shows that time spent waiting for tests to run accounts for at most 1% of development time, so it is probably not worth optimising any further...
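(For illustration, with assumed rather than measured numbers: if each of us manually runs an affected Selenium test about five times a day and each run carries roughly a minute of setup/teardown, that is about 5 minutes out of an 8-hour day, i.e. 5 / 480 ≈ 1%.)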