Bug #6764

Acorn: Heap overflow when importing 1GB+ FileDocument

Added by Jussi Koskela over 1 year ago. Updated 5 months ago.

Status: On hold
Priority: 4
Assignee: Antti Villberg
Category: -
Target version: -
Start date:
Due date:
% Done: 0%
Spent time: -
Release notes:
Tags: db, acorn, OOM
Story points: -
Velocity based estimate: -

Description

The problem is that org.simantics.db.javacore.lru.FileInfo keeps the whole file in memory.
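For illustration, a minimal sketch (not the actual Simantics code) of the pattern that causes the overflow: accumulating the whole stream into an in-memory byte[] means a 1GB+ import needs over a gigabyte of heap just for the buffer.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class InMemoryBuffering {
    // Reads the whole file into heap memory. With a 1GB+ FileDocument this
    // needs 1GB+ of heap (plus growth overhead while copying) and fails with
    // OutOfMemoryError unless the JVM is given a very large -Xmx.
    static byte[] readWhole(Path file) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (InputStream in = Files.newInputStream(file)) {
            byte[] chunk = new byte[1 << 16];
            int n;
            while ((n = in.read(chunk)) != -1) {
                buffer.write(chunk, 0, n);
            }
        }
        return buffer.toByteArray(); // the entire file stays resident in heap
    }
}
```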

History

#1 Updated by Tuukka Lehtonen about 1 year ago

  • Tags set to db, acorn, OOM
  • Due date deleted (2016-10-14)
  • Status changed from New to On hold
  • Target version deleted (1.25.0)
  • Start date deleted (2016-10-14)

#2 Updated by Antti Villberg 9 months ago

  • Subject changed from Acorn: Heap overflow when importing 1GB+ FileDocoument to Acorn: Heap overflow when importing 1GB+ FileDocument

#3 Updated by Tuukka Lehtonen 5 months ago

Took a stab at fixing this. I managed to create a partial patch that fixes the problems with FileInfo, but beyond that there are still problems with ClusterStreamChunk, and possibly with other parts as well. With this patch, FileInfo no longer stores byte[] arrays internally in memory but reads/writes everything directly from/to disk via BinaryFile.
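For reference, a minimal sketch of the direction the patch takes, using java.io.RandomAccessFile as a stand-in for BinaryFile; the class name and methods below are illustrative, not the actual patch:

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Hypothetical stand-in for the patched FileInfo: all data lives in the
// backing file on disk, never in heap-resident byte[] arrays.
public class DiskBackedFileInfo implements AutoCloseable {
    private final RandomAccessFile backing;

    public DiskBackedFileInfo(String path) throws IOException {
        this.backing = new RandomAccessFile(path, "rw");
    }

    // Writes go directly to disk at the given offset; nothing is retained
    // in memory, so heap usage stays constant regardless of file size.
    public void write(long offset, byte[] data, int off, int len) throws IOException {
        backing.seek(offset);
        backing.write(data, off, len);
    }

    // Reads are served from disk on demand instead of from a cached byte[].
    public int read(long offset, byte[] target, int off, int len) throws IOException {
        backing.seek(offset);
        return backing.read(target, off, len);
    }

    @Override
    public void close() throws IOException {
        backing.close();
    }
}
```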

I suspect that ClusterStreamChunk would need the same kind of treatment.

The problem seems (to me) to be that pushing a large file into the database via WriteGraph.getRandomAccessBinary(ReadGraph) writes the data in segments as ClusterStreamChunks, which even with these changes still store all the pushed data in memory as byte[]. Acorn's background threads are unable to process the Writable queues fast enough for that data to become GC'able.
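Illustrative only, not Acorn's actual queue code: one way to keep the chunk backlog bounded would be back-pressure on the producer, e.g. a fixed-capacity BlockingQueue, so that at most a fixed number of chunks are live on the heap at once and processed chunks can be GC'ed:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of bounded producer/consumer hand-off. The capacity limits how many
// unprocessed chunks can exist simultaneously, regardless of import size.
public class BoundedChunkQueue {
    private final BlockingQueue<byte[]> queue;

    public BoundedChunkQueue(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    // Producer side: blocks when the queue is full instead of letting
    // unprocessed chunks pile up in memory.
    public void submit(byte[] chunk) throws InterruptedException {
        queue.put(chunk);
    }

    // Consumer side (background thread): once the chunk has been flushed to
    // disk and the reference dropped, the byte[] becomes collectable.
    public byte[] take() throws InterruptedException {
        return queue.take();
    }
}
```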

The draft changes are at https://www.simantics.org:8088/r/#/c/843/. Need to continue these at a later time.
