This Bugzilla instance is a read-only archive of historic NetBeans bug reports. To report a bug in NetBeans please follow the project's instructions for reporting issues.
| Summary: | CPU profiling for NB module fails | | |
|---|---|---|---|
| Product: | profiler | Reporter: | Rashid Urusov <rashid> |
| Component: | Base | Assignee: | issues@profiler <issues> |
| Status: | RESOLVED DUPLICATE | | |
| Severity: | blocker | CC: | issues, jiriprox, jlahoda |
| Priority: | P2 | | |
| Version: | 6.x | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Issue Type: | DEFECT | Exception Reporter: | |
| Attachments: | Message.log, Memory profiling message.log | | |
Description

Rashid Urusov, 2008-06-26 10:55:56 UTC

Created attachment 63496 [details]: Message.log

Similar behaviour for memory profiling.

Created attachment 63497 [details]: Memory profiling message.log
Changed OS to Linux. I reproduced it on Ubuntu 8.04.

Jirko, can you reproduce it?

rashid, is /home/tester/.netbeans/dev/var/cache/index/0.8/s17/refs accessible?

There is no folder /home/tester/.netbeans/dev/var/cache/index/0.8/s17 on my computer. The last one in the /home/tester/.netbeans/dev/var/cache/index/0.8 directory is s12.

Seems like a duplicate of issue #129931 - too many open files.

This doesn't seem to be caused by the profiler; it just uses the ClassIndex.getElements(ElementHandle, Set, Set) API call, which shouldn't throw such an exception. Reassigning.

The root cause, IMO, is that there are too many open files - see the first attached messages.log (and issue #129931). I doubt there is anything reasonable the Java infra could do about this: even if this exception were suppressed, it would not be possible to return correct results, and other parts of the IDE would still suffer from the lack of file descriptors. Moreover, suppressing it would only mask the real problem in this case. BTW: given that issue #129931 was waived for NB 6.1, I do not think this should be considered a 6.5M1 stopper.

Sorry for not reading the log files carefully - you're right, too many open files may have been the root cause of the problem. Marking as a duplicate of issue #129931; increasing the limit of open files should work as a workaround. Anyway, AFAIK the profiler cannot check or control the current open-files usage; it just uses a public API which doesn't document any dependency on the number of open files. And since any other code could run into exactly the same problem through this API call, I don't think it's a profiler bug. If the Java infrastructure issues large queries which can cause such problems, it should solve this itself, or at least mention it in the API docs and provide sample solutions for avoiding or handling it.
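The workaround mentioned above (raising the open-file limit) can be sketched for a Linux shell as follows. The value 4096 is only an illustrative choice, not one taken from this report, and the soft limit may not exceed the hard limit:

```shell
# Show the current soft limit on open file descriptors for this shell
ulimit -n

# Show the hard limit, which the soft limit may not exceed
ulimit -Hn

# Raise the soft limit for this session, then start the IDE from the
# same shell so the profiled process inherits the higher limit
ulimit -n 4096
```

The change is per-session; making it permanent would typically involve the system's limits configuration (e.g. /etc/security/limits.conf on many Linux distributions).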
Also, are you sure that trying to access a non-existing folder, which throws an IOException in the org.apache.lucene library, is the same case as accessing an existing file that is merely unavailable because too many files are already open? Just an idea, so as not to mask some other problem...

*** This issue has been marked as a duplicate of 129931 ***

I have added an analysis to issue #129931.

> Anyway, AFAIK the profiler cannot check or control the current open-files usage; it just uses a public API which doesn't document any dependency on the number of open files. And since any other code could run into exactly the same problem through this API call, I don't think it's a profiler bug.

Not sure which API call you mean - the profiler seems to be trying to keep 978 files open at the same time (see my comment in issue #129931). I do not think this is reasonable.

> If the Java infrastructure issues large queries which can cause such problems, it should solve this itself, or at least mention it in the API docs and provide sample solutions for avoiding or handling it.

It is true that the Java infra keeps quite a few files open (~288 in my development IDE), but I am not aware of this causing problems in any case except profiling.

> Also, are you sure that trying to access a non-existing folder, which throws an IOException in the org.apache.lucene library, is the same case as accessing an existing file that is merely unavailable because too many files are already open?

Likely, the cache directory was not created in the first place because of the "Too many open files" problem. We could throw an exception sooner, but that would only help to identify the problem sooner, not to solve it, IMO.
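Claims like "keeping 978 files open" can be checked directly on Linux through the /proc filesystem. A minimal sketch follows; the `pgrep -f netbeans` process-name pattern is an assumption, so for self-containment the commands inspect the current shell instead:

```shell
# PID of the process to inspect. A NetBeans IDE process could be found
# with something like: PID=$(pgrep -f netbeans | head -n 1)
# (the process-name pattern is an assumption). Here we inspect this
# shell itself so the commands run anywhere:
PID=$$

# Number of file descriptors the process currently holds open
ls "/proc/$PID/fd" | wc -l

# The open-files limit that process is actually running under
grep 'open files' "/proc/$PID/limits"
```

Comparing the descriptor count against the "Max open files" line shows how close a process is to hitting the EMFILE ("Too many open files") condition discussed above.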