That page doesn’t exist or is private

Hello
I have written a topic about a memory error and now I cannot find it.
I receive this: That page doesn’t exist or is private.
Does anybody know what may have happened?

Did you inadvertently delete the topic?

No!
I can see the topic’s title in my posts, but when I try to open it and read the replies, I get the following message: That page doesn’t exist or is private :thinking:

I see the same thing.

We don’t delete, remove, or hide topics in this forum, so ¯\_(ツ)_/¯

Could I post it again here so that someone can help me?

Sure. Maybe read How to ask good questions first.

We use the OpenNMS platform on a machine with the following characteristics:

cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
stepping : 2
microcode : 0x1a
cpu MHz : 2660.000
cache size : 12288 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc aperfmperf eagerfpu pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt aes hypervisor lahf_lm dtherm ida arat
bogomips : 5320.00
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
stepping : 2
microcode : 0x1a
cpu MHz : 2660.000
cache size : 12288 KB
physical id : 2
siblings : 1
core id : 0
cpu cores : 1
apicid : 2
initial apicid : 2
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc aperfmperf eagerfpu pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt aes hypervisor lahf_lm dtherm ida arat
bogomips : 5320.00
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
processor : 2
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
stepping : 2
microcode : 0x1a
cpu MHz : 2660.000
cache size : 12288 KB
physical id : 4
siblings : 1
core id : 0
cpu cores : 1
apicid : 4
initial apicid : 4
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc aperfmperf eagerfpu pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt aes hypervisor lahf_lm dtherm ida arat
bogomips : 5320.00
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:

In addition to the above, the machine has 8 GB of RAM and a 180 GB hard disk.

The problem is that when we restart OpenNMS, we see this in the output: HeapDumpOnOutOfMemoryError

We are new to the platform and would appreciate your help.
Thank you in advance!

That’s not an error message.

The -XX:HeapDumpOnOutOfMemoryError Option
This option tells the Java HotSpot VM to generate a heap dump when an allocation from the Java heap or the permanent generation cannot be satisfied. There is no overhead in running with this option, so it can be useful for production systems where the OutOfMemoryError exception takes a long time to surface.
You can also specify this option at runtime with the MBeans tab in the JConsole utility.
The heap dump is in HPROF binary format, and so it can be analyzed using any tools that can import this format. For example, the jhat tool can be used to do rudimentary analysis of the dump. For more information on the jhat tool, see The jhat Utility.
Example D-1 shows the result of running out of memory with this flag set.
Example D-1 Sample Code for Running Out of Memory
$ java -XX:+HeapDumpOnOutOfMemoryError -mn256m -mx512m ConsumeHeap
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid2262.hprof ...
Heap dump file created [531535128 bytes in 14.691 secs]
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at ConsumeHeap$BigObject.<init>(ConsumeHeap.java:22)
at ConsumeHeap.main(ConsumeHeap.java:32)
The ConsumeHeap fills up the Java heap and runs out of memory. When the java.lang.OutOfMemoryError exception is thrown, a heap dump file is created. In this case the file is 507 MB and is created with the name java_pid2262.hprof in the current directory.
By default the heap dump is created in a file called java_pid<pid>.hprof in the working directory of the VM, as in the example above. You can specify an alternative file name or directory with the -XX:HeapDumpPath= option. For example -XX:HeapDumpPath=/disk2/dumps will cause the heap dump to be generated in the /disk2/dumps directory.

https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/clopts001.html
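The ConsumeHeap program referenced in that example is not reproduced on the Oracle page; a minimal sketch of what such a program might look like (the class structure and field names here are only illustrative, not Oracle’s actual source) is:

// ConsumeHeap.java - illustrative sketch: keeps allocating ~1 MB objects
// until the heap is exhausted and the JVM throws OutOfMemoryError, which
// triggers the dump when -XX:+HeapDumpOnOutOfMemoryError is set.
import java.util.ArrayList;
import java.util.List;

public class ConsumeHeap {

    // Each BigObject pins roughly 1 MB of heap.
    static class BigObject {
        private final byte[] payload = new byte[1024 * 1024];
    }

    public static void main(String[] args) {
        List<BigObject> hoard = new ArrayList<>();
        while (true) {
            hoard.add(new BigObject()); // eventually fails with OutOfMemoryError
        }
    }
}

Running it with java -XX:+HeapDumpOnOutOfMemoryError -mn256m -mx512m ConsumeHeap should produce a java_pid<pid>.hprof file in the working directory, which can then be inspected with the jhat tool mentioned above or a graphical analyzer such as Eclipse Memory Analyzer.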

Thank you very much for the comprehensible explanation!

Moreover, does the following log indicate the same problem?
Jul 07 10:33:27 opennms[9026]: Caused by: java.lang.OutOfMemoryError: Java heap space
Jul 07 10:33:27 opennms[9026]: java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
Jul 07 10:33:27 opennms[9026]: at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1429)
Jul 07 10:33:27 opennms[9026]: Caused by: java.lang.OutOfMemoryError: Java heap space
Jul 07 10:33:27 opennms[9026]: java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
Jul 07 10:33:27 opennms[9026]: at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1429)
Jul 07 10:33:27 opennms[9026]: Caused by: java.lang.OutOfMemoryError: Java heap space
Jul 07 10:33:27 opennms[9026]: java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
Jul 07 10:33:27 opennms[9026]: at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1429)
Jul 07 10:33:27 opennms[9026]: Caused by: java.lang.OutOfMemoryError: Java heap space

That’s an actual OutOfMemory error. :slight_smile:

Increase the amount of heap from the default:
Create a file named opennms.conf in $OPENNMS_HOME/etc/ that contains the line

JAVA_HEAP_SIZE=someValue

Where someValue is the heap size in megabytes.

Note: the default is 2048 megabytes (2 GB).
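For example, to raise the heap to 4 GB, the file would contain a single line. (4096 is only an illustrative value — pick one that fits within your 8 GB of RAM, and adjust the path if your $OPENNMS_HOME is not the usual /opt/opennms.)

# /opt/opennms/etc/opennms.conf
JAVA_HEAP_SIZE=4096

After saving the file, restart OpenNMS (e.g. systemctl restart opennms on a systemd-based install) and check that the corresponding -Xmx value shows up in the command line of the running Java process (ps -ef | grep opennms).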


By the way: MemmoryError HeapDumpOnOut while starting OpenNMS


Hi! The linked topic, MemmoryError HeapDumpOnOut while starting OpenNMS, is my older post and I cannot open it. Could you please share the replies here?

Thank you in advance !

I’m not sure how it was stuck in “draft” mode, but the post should be published and visible now.


Hello! I have the same problem with my new topic about the health-check command:
Health-check command missing?
I cannot see the answer :unamused: