Acorn: Heap overflow when importing 1GB+ FileDocument
|Status:|On hold|
|Start date:|2018-01-08|
|Assignee:|Antti Villberg|
|% Done:||
|Tags:|db, acorn, OOM|
|Velocity based estimate:|-|
The problem is that org.simantics.db.javacore.lru.FileInfo keeps the whole file in memory.
#3 Updated by Tuukka Lehtonen 7 months ago
Took a stab at fixing this. Managed to create a partial patch that fixes the problems with FileInfo, but after that there are problems with ClusterStreamChunk, and possibly with something else beyond that. With this patch FileInfo no longer stores byte arrays internally in memory but reads/writes everything directly from/to disk via BinaryFile.
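The idea behind the FileInfo change can be sketched roughly as follows. This is a hypothetical illustration, not the actual Simantics patch: all names here (`StreamingFileInfo`, `writeSegment`, `readSegment`) are invented, and plain `RandomAccessFile` stands in for Acorn's `BinaryFile`. The point is that each segment goes straight to disk and nothing is retained on the heap:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Path;

// Hypothetical sketch: stream incoming segments straight to disk
// instead of accumulating the whole file as byte[] in memory.
public class StreamingFileInfo implements AutoCloseable {
    private final RandomAccessFile backing;

    public StreamingFileInfo(Path path) throws IOException {
        this.backing = new RandomAccessFile(path.toFile(), "rw");
    }

    // Write a segment at the given file offset; the caller's buffer
    // can be GC'ed immediately after this call returns.
    public void writeSegment(long offset, byte[] segment, int off, int len)
            throws IOException {
        backing.seek(offset);
        backing.write(segment, off, len);
    }

    // Read a slice back from disk on demand; returns bytes read.
    public int readSegment(long offset, byte[] target, int off, int len)
            throws IOException {
        backing.seek(offset);
        return backing.read(target, off, len);
    }

    public long length() throws IOException {
        return backing.length();
    }

    @Override
    public void close() throws IOException {
        backing.close();
    }
}
```

With this shape, peak heap usage is bounded by the size of a single segment buffer rather than by the total file size.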
I suspect that ClusterStreamChunk needs the same kind of treatment.
The problem seems (to me) to be that pushing a large file to the database via
WriteGraph.getRandomAccessBinary(ReadGraph) writes the data in segments as ClusterStreamChunks, which, even with these changes, still store all the pushed data in memory as byte arrays, and Acorn's background threads are unable to process the Writable queues fast enough for the data to be GC'ed.
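One way to think about the queue problem: as long as the producer can enqueue chunks faster than the background thread drains them, the heap grows without bound no matter where the bytes eventually land. A bounded queue that blocks the producer would cap resident memory. This is a generic backpressure sketch using `java.util.concurrent.ArrayBlockingQueue`, not the actual Acorn code (the class name and chunk sizes are assumptions):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch: a bounded chunk queue applies backpressure so
// the writer blocks instead of piling up byte[] chunks faster than the
// background thread can flush (and GC) them. With, say, 1 MB chunks
// and capacity 64, at most ~64 MB of pushed data is ever resident,
// regardless of the total file size.
public class BoundedChunkQueue {
    private final BlockingQueue<byte[]> queue;

    public BoundedChunkQueue(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    // Producer side: blocks when the consumer is behind, instead of
    // letting chunks accumulate unbounded on the heap.
    public void push(byte[] chunk) throws InterruptedException {
        queue.put(chunk);
    }

    // Consumer side: a background thread takes chunks and flushes them
    // to disk; once written, a chunk becomes unreachable and GC-able.
    public byte[] take() throws InterruptedException {
        return queue.take();
    }

    public int size() {
        return queue.size();
    }
}
```

Whether this fits Acorn's existing writer/consumer threading is an open question; it only illustrates why an unbounded in-memory chunk list defeats the FileInfo fix.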
The draft changes are at https://www.simantics.org:8088/r/#/c/843/. Need to continue these at a later time.