
I'm facing a problem with my topology. I ran it in local mode using this command:

mvn compile exec:java -Dexec.classpathScope=compile -Dexec.mainClass=trident.MyTopology 

and got

Async loop died!java.lang.OutOfMemoryError: GC overhead limit exceeded

Can you help with this? If you need any additional information, just tell me.

I think storm.yaml is not important here because this error happened locally, not in production, or am I wrong?

 Selection  
  0            /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java   1071      auto mode
* 1            /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java   1061      manual mode
  2            /usr/lib/jvm/java-6-oracle/jre/bin/java          1062      manual mode
  3            /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java   1071      manual mode

Thanks in advance

  • Possible duplicate of [java.lang.OutOfMemoryError: GC overhead limit exceeded](http://stackoverflow.com/questions/5839359/java-lang-outofmemoryerror-gc-overhead-limit-exceeded) – A_Di-Matteo Apr 02 '16 at 22:47
  • Thanks, but how can I check the memory size I have for Java? And he mentioned "Work with smaller batches of HashMap objects to process at once if possible" — what does that mean? Thanks again –  Apr 02 '16 at 22:50
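On the question of checking how much memory Java has available: one quick way, from inside any Java program, is to ask the `Runtime`. This is a minimal sketch; the exact figures depend on your `-Xmx` setting (or the JVM's platform default):

```java
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Maximum heap the JVM will attempt to use (the -Xmx limit, or a platform default)
        long maxMb = rt.maxMemory() / (1024 * 1024);
        // Heap the JVM has currently allocated
        long totalMb = rt.totalMemory() / (1024 * 1024);
        System.out.println("max heap:   " + maxMb + " MB");
        System.out.println("total heap: " + totalMb + " MB");
    }
}
```

If `maxMb` is small (e.g. a few hundred MB), that alone can explain a "GC overhead limit exceeded" error on a large workload.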

2 Answers


Looks like Maven needs more heap space. It might be an issue in Maven itself, or your build may simply be too big for the default settings. Either way, you can try giving it more heap space with:

export MAVEN_OPTS="-Xmx3000m"
mvn compile exec:java -Dexec.classpathScope=compile -Dexec.mainClass=trident.MyTopology 

That will give Maven ~3 GB. You can experiment with larger or smaller values to see what works best in your case.

There are more details and ideas in: Maven Out of Memory Build Failure

– kichik
  • Thanks for replying. Before the GC message I got this line: "WARN org.apache.storm.zookeeper.server.persistence.FileTxnLog - fsync-ing the write ahead log in SyncThread:0 took 1220ms which will adversely effect operation latency. See the ZooKeeper troubleshooting guide". Does that mean I should use your solution too, or is it a different problem? –  Apr 03 '16 at 14:15
  • That might indicate you are still having GC issues. If you gave it 3 GB, you might have a memory leak. – kichik Apr 03 '16 at 19:14

It's hard to guess what's wrong with the topology. The exception tells you that GC is running too frequently. You can try to increase the worker heap (the worker.childopts setting). Another option is to limit the number of tuples your topology works with at the same time (the topology.max.spout.pending setting), but this will only work if your topology supports acking. Can you provide more information about the topology you are running and the worker.childopts and topology.max.spout.pending settings you are using?
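For reference, both settings mentioned above can be set in storm.yaml (or per topology via the `Config` object). The values below are illustrative placeholders, not recommendations:

```yaml
# Illustrative values only - tune for your own topology
worker.childopts: "-Xmx2048m"       # JVM options (here, heap size) per worker
topology.max.spout.pending: 1000    # max unacked tuples in flight per spout task
```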

– f1sherox
  • Thanks for replying. My worker.childopts is "-Xmx4048m -XX:MaxPermSize=256m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -verbose:gc -Xloggc:/usr/local/storm/logs/gc-storm-worker-%6700%.log" and topology.max.spout.pending is 50000000 –  Apr 05 '16 at 22:12
  • A topology.max.spout.pending of 50,000,000 is far too much. It means your topology works with 50 million tuples at the same time and tracks them all, if you enabled acknowledgement. I guess that's the reason for your OOM exception. Try starting with smaller numbers (10,000 or even 1,000) and see what happens. – f1sherox Apr 05 '16 at 22:39
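The scale difference f1sherox points at can be made concrete with back-of-envelope arithmetic. Assuming, purely for illustration, ~200 bytes of heap per in-flight tuple (the real cost depends on tuple contents and Storm's ack tracking overhead):

```java
public class PendingMath {
    // Hypothetical per-tuple heap cost; the real figure depends on tuple size
    static final long BYTES_PER_TUPLE = 200;

    static long megabytes(long pendingTuples) {
        return pendingTuples * BYTES_PER_TUPLE / (1024 * 1024);
    }

    public static void main(String[] args) {
        // 50,000,000 pending tuples at ~200 bytes each: roughly 9,536 MB
        System.out.println("50M pending ~ " + megabytes(50000000L) + " MB");
        // 10,000 pending tuples: about 1 MB, negligible
        System.out.println("10k pending ~ " + megabytes(10000L) + " MB");
    }
}
```

Even under this rough assumption, 50 million tracked tuples would not fit in the `-Xmx4048m` worker heap quoted above, while 10,000 pending tuples costs essentially nothing.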
  • Thanks for replying. OK, I'll try that, but this project previously worked well for me with these configurations and produced the result, which is around 11 GB. I have since updated the code and expect to get more data; should I use a different configuration? And sorry for one more question: what is the point of increasing or decreasing max.spout.pending, and what is its impact on the result? –  Apr 05 '16 at 22:47
  • Lower max.spout.pending means less memory needed, for sure. Changing this parameter does not always affect the performance of your topology: at some point, increasing pending tuples stops having any positive effect. Where that point is, you can only find out by testing. So, start with small numbers. – f1sherox Apr 05 '16 at 23:14
  • Many thanks. I'm trying 10,000 now and will see whether it works, but I want to make sure it will not hurt performance. When should I use a higher max.spout.pending? –  Apr 05 '16 at 23:20
  • As I said, at some point increasing pending tuples won't affect your topology's performance at all, except that you will need more memory to store all those tuples. Find the optimal value by testing. – f1sherox Apr 05 '16 at 23:33
  • I tried 10,000 and got the same error. I want to mention that I ran with 50,000,000 successfully before, but now I have changed the code and want to run it with 50,000,000 again. –  Apr 06 '16 at 05:48