diff --git a/README.md b/README.md
index 3ea2dbb..0b47547 100644
--- a/README.md
+++ b/README.md
@@ -1,44 +1,80 @@
-# Dtleakanalyzer
+# DtLeakAnalyzer
A tool for supporting identifying memory leaks with dtrace
### Overview
This tool is intended to be used in supporting memory leak investigations with dtrace. It consists of
-- a set of D scripts (that attach to a process and start tracing)
-- a Java program that analyzes the traces and produces a report
-
-It re-uses the .d script defined in https://blogs.oracle.com/openomics/investigating-memory-leaks-with-dtrace moving most of the trace analysis to the newly created Java post-processing program. Moreover it tries to detect wrong delete operations (i.e. delete on memory that was allocated with new[])
+- a **set of D scripts** (that attach to a process and start tracing)
+- a **Java program that analyzes the traces** and produces a report
+
+Overall there are **4 basic modes of usage**:
+- **Single memory allocator tracing session**, where we collect traces for a process once and analyze them
+- **Combination of multiple memory allocator tracing sessions**, where we combine multiple tracing sessions of different durations
+- **Combination of multiple short-term and long-term memory allocator traces** (there are different D scripts for the short-term and long-term traces; the short-term traces are used to train the system in order to interpret the long-term traces)
+- **Process memory growth analysis**
+
+In summary, it provides the following features:
+- **Single memory allocator tracing session**
+-- presentation of call stacks that appear to be causing memory leaks
+-- heuristic analysis for pointing out strongly suspected memory leaks
+-- presentation of call stacks that appear to be freeing memory wrongly
+-- heuristic analysis for pointing out strongly suspected wrong free call stacks
+-- identification of double free operations
+-- presentation of a combined call stack where the potential memory leaks are identified
+- **Combination of multiple memory allocator tracing sessions** (on top of the features above)
+-- combined presentation of the occurrence of suspected call stacks for all trace files
+-- heuristic analysis for pointing out very strongly suspected call stacks
+- **Combination of multiple short-term and long-term traces** (on top of the features above)
+-- heuristic analysis of long-term traces
+- **Process memory growth analysis**
+-- presentation of call stacks that caused memory growth, including occurrences and total size
+-- presentation of a combined call stack where all calls that caused memory growth are presented
+
+More information on the usage and capabilities of DtLeakAnalyzer can be found in the [DtLeakAnalyzer usage manual](resources/DtLeakAnalyzer.pdf).
### Running
-First we collect traces from the running process
+Detailed instructions can be found in the [DtLeakAnalyzer usage manual](resources/DtLeakAnalyzer.pdf).
+
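+The other modes are invoked along the same lines. The commands below are an illustrative sketch only, based on the argument handling in DTLeakAnalyzer.java; the trace file and directory names are placeholders (see the usage manual for the exact arguments):
+```
+# combination of multiple tracing sessions collected in one directory
+> java -jar dtleakanalyzer.jar -d traces-dir combined.report
+
+# combination of long-term (dtrace-processed) traces with short-term memory allocator traces used for training
+> java -jar dtleakanalyzer.jar -p longterm-traces-dir -d shortterm-traces-dir combined.report
+
+# process memory growth analysis (brk/sbrk traces)
+> java -jar dtleakanalyzer.jar -f brk trace-procmem.log trace-procmem.log.report
+```
+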
+As a **single memory allocator tracing session** example, first we collect traces from the running process, e.g.
```
-> trace_new_delete.d > traces.txt
+> ./trace-memalloc.d 14291 > trace-memalloc.log
```
(press ctrl^C when we are done)
-then we demangle names in the call stack (this is an optional step to show the call stacks of your program clearly)
+
+then we run the trace analysis program
```
-> c++filt traces.txt > traces.dem.txt
+> java -jar dtleakanalyzer.jar -f memalloc trace-memalloc.log trace-memalloc.log.report
```
-then we tun the trace analysis program
+The output of the trace analysis tool for this case is:
```
-> java -jar dtleakanalyzer.jar traces.dem.txt traces.dem.txt.report
-Output:
-Trace analyser started on Mon Oct 08 13:40:45 CEST 2018
-
-
-Processing wrong deletes [delete on memory that was allocated with new[]), found 0 instances
-found 0 unique wrong delete stacks
-Analyzing 2290 potential memory leaks
-Processing completed.
-Detected 108 potential memory leaks
+Started memory allocator analysis for file memalloc trace-memalloc.log on:Thu Nov 08 08:03:54 CET 2018
+Finished memory allocator analysis for file memalloc trace-memalloc.log on:Thu Nov 08 08:04:05 CET 2018
+Call statistics
+Found 142511 malloc calls
+Found 0 calloc calls
+Found 0 realloc calls
+Found 142427 free calls
+
+Double free issues
+Found 0 double free stacks in total
+
+Free non-allocated memory issues (may also be potential memory leaks)
+Found 2617 stacks that freed memory that was not allocated during the period of the trace
+Found 17 unique stacks that freed memory that was not allocated during the period of the trace
+Found 495 unique stacks that correctly freed memory
+Found 0 unique stacks that have never been found to correctly free memory
+
+Memory leak issues
+Found 2701 potential memory leaks in total
+Found 44 unique potential memory leak stacks (suspects)
+Found 553 unique stacks that allocated memory that was correctly freed
+Found 1 unique stacks that were never correctly deleted/freed (strong suspects)
-finished on Mon Oct 08 13:40:47 CEST 2018
```
-This will produce a report (traces.dem.txt.report) which contains all useful information and potential memory leaks.
-
+The report will be produced in the specified file (trace-memalloc.log.report) and will contain all relevant information about the identified call stacks and heuristics.
### Compiling
@@ -50,30 +86,59 @@ Extract the reporitory and run the compile.bat or .sh
```
+c:\dev\projects\DTLeakAnalyzer>compile.bat
+
c:\dev\projects\DTLeakAnalyzer>javac -d classes src/DTLeakAnalyzer.java
c:\dev\projects\DTLeakAnalyzer>jar cvfm dtleakanalyzer.jar resources/manifest.txt -C classes .
added manifest -adding: DTLeakAnalyzer$1.class(in = 894) (out= 465)(deflated 47%) -adding: DTLeakAnalyzer$2.class(in = 839) (out= 451)(deflated 46%) -adding: DTLeakAnalyzer$3.class(in = 793) (out= 479)(deflated 39%) -adding: DTLeakAnalyzer$DTLeakLogEntry.class(in = 3509) (out= 1811)(deflated 48%) -adding: DTLeakAnalyzer$DTLeakLogEntryType.class(in = 1112) (out= 582)(deflated 47%) -adding: DTLeakAnalyzer$DTLeakReportEntry.class(in = 703) (out= 431)(deflated 38%) -adding: DTLeakAnalyzer$DTLeakWrongDeleteEntry.class(in = 806) (out= 412)(deflated 48%) -adding: DTLeakAnalyzer$DTLeakWrongDeleteReportEntry.class(in = 1020) (out= 506)(deflated 50%) -adding: DTLeakAnalyzer.class(in = 7823) (out= 3849)(deflated 50%) +adding: classes.txt(in = 0) (out= 0)(stored 0%) +adding: DTLeakAnalyzer$1.class(in = 561) (out= 380)(deflated 32%) +adding: DTLeakAnalyzer$10.class(in = 1475) (out= 709)(deflated 51%) +adding: DTLeakAnalyzer$11.class(in = 795) (out= 458)(deflated 42%) +adding: DTLeakAnalyzer$12.class(in = 799) (out= 461)(deflated 42%) +adding: DTLeakAnalyzer$13.class(in = 806) (out= 467)(deflated 42%) +adding: DTLeakAnalyzer$14.class(in = 1991) (out= 872)(deflated 56%) +adding: DTLeakAnalyzer$2.class(in = 561) (out= 380)(deflated 32%) +adding: DTLeakAnalyzer$3.class(in = 561) (out= 376)(deflated 32%) +adding: DTLeakAnalyzer$4.class(in = 838) (out= 458)(deflated 45%) +adding: DTLeakAnalyzer$5.class(in = 838) (out= 463)(deflated 44%) +adding: DTLeakAnalyzer$6.class(in = 838) (out= 459)(deflated 45%) +adding: DTLeakAnalyzer$7.class(in = 842) (out= 456)(deflated 45%) +adding: DTLeakAnalyzer$8.class(in = 806) (out= 463)(deflated 42%) +adding: DTLeakAnalyzer$9.class(in = 806) (out= 464)(deflated 42%) +adding: DTLeakAnalyzer$BrkStackOccurence.class(in = 1024) (out= 568)(deflated 44%) +adding: DTLeakAnalyzer$BrkTraceEntry.class(in = 3561) (out= 1856)(deflated 47%) +adding: DTLeakAnalyzer$BrkTraceEntryType.class(in = 990) (out= 523)(deflated 47%) +adding: DTLeakAnalyzer$DTGenericLeakLogEntry.class(in = 3648) (out= 1850)(deflated 49%) +adding: DTLeakAnalyzer$DTLeakAnalyzerFileType.class(in = 1029) (out= 519)(deflated 49%) +adding: DTLeakAnalyzer$DTLeakBrkLogEntry.class(in = 3586) (out= 1841)(deflated 48%) +adding: DTLeakAnalyzer$DTLeakBrkReportEntry.class(in = 998) (out= 548)(deflated 45%) +adding: DTLeakAnalyzer$DTLeakCppLogEntry.class(in = 3531) (out= 1830)(deflated 48%) +adding: DTLeakAnalyzer$DTLeakGenLogEntry.class(in = 3640) (out= 1873)(deflated 48%) +adding: DTLeakAnalyzer$DTLeakLogBrkEntryType.class(in = 1018) (out= 521)(deflated 48%) +adding: DTLeakAnalyzer$DTLeakLogCppEntryType.class(in = 1133) (out= 586)(deflated 48%) +adding: DTLeakAnalyzer$DTLeakLogEntry.class(in = 3510) (out= 1828)(deflated 47%) +adding: DTLeakAnalyzer$DTLeakLogEntryType.class(in = 1112) (out= 586)(deflated 47%) +adding: DTLeakAnalyzer$DTLeakLogGenEntryType.class(in = 1128) (out= 580)(deflated 48%) +adding: DTLeakAnalyzer$DTLeakMemAllocLogEntry.class(in = 3649) (out= 1866)(deflated 48%) +adding: DTLeakAnalyzer$DTLeakReportEntry.class(in = 799) (out= 454)(deflated 43%) +adding: DTLeakAnalyzer$DTLeakWrongDeleteEntry.class(in = 824) (out= 414)(deflated 49%) +adding: DTLeakAnalyzer$DTLeakWrongDeleteReportEntry.class(in = 1020) (out= 510)(deflated 50%) +adding: DTLeakAnalyzer$MemoryAllocationTraceEntryType.class(in = 1191) (out= 596)(deflated 49%) +adding: DTLeakAnalyzer$MemoryAllocatorTraceEntry.class(in = 3697) (out= 1873)(deflated 49%) +adding: DTLeakAnalyzer$StackOccurence.class(in = 1142) (out= 583)(deflated 48%) +adding: 
DTLeakAnalyzer$TraceFileType.class(in = 966) (out= 521)(deflated 46%) +adding: DTLeakAnalyzer.class(in = 36673) (out= 15321)(deflated 58%) ``` -This will create the dtleakanalyzer.jar on the current folder. +This will create the dtleakanalyzer.jar executable jar on the current folder. ## Contributing - - + Please feel free to extend the project! - ## License @@ -88,6 +153,6 @@ This project is licensed under the MIT License (https://opensource.org/licenses/ ## Acknowledgments - -* Thanks to the authors of the original article on how to use dtrace for supporting memory leak investigations : https://blogs.oracle.com/openomics/investigating-memory-leaks-with-dtrace moving +* Thanks to Brendan Gregg for his excellent page on memory leaks and ways to identify them: http://www.brendangregg.com/Solaris/memoryflamegraphs.html +* Thanks to the authors of this article on how to use dtrace for supporting memory leak investigations : https://blogs.oracle.com/openomics/investigating-memory-leaks-with-dtrace \ No newline at end of file diff --git a/resources/Analyzing reports with dtleakanalyzer.pdf b/resources/Analyzing reports with dtleakanalyzer.pdf deleted file mode 100644 index 47132c4..0000000 Binary files a/resources/Analyzing reports with dtleakanalyzer.pdf and /dev/null differ diff --git a/resources/D scripts/execute-memalloc-loop b/resources/D scripts/execute-memalloc-loop new file mode 100644 index 0000000..60f081e --- /dev/null +++ b/resources/D scripts/execute-memalloc-loop @@ -0,0 +1,15 @@ +for REP in 1 2 3 4 5; do + for SLTIME in 10 20 40 ; do + echo tracing process:$1 for :$SLTIME seconds. Repetition:$REP + + ./trace-memalloc.d $1 > ./trace-memalloc.$SLTIME.$REP & + dtracepid=$! + + echo started dtrace process, pid:$dtracepid + sleep $SLTIME + kill -SIGINT $dtracepid + echo sent interrupt signal to dtrace process + wait $dtracepid + echo dtrace finished + done +done diff --git a/resources/D scripts/execute-memalloc-loop-proc b/resources/D scripts/execute-memalloc-loop-proc new file mode 100644 index 0000000..34da88e --- /dev/null +++ b/resources/D scripts/execute-memalloc-loop-proc @@ -0,0 +1,15 @@ +for REP in 1 2 3 4 5; do + for SLTIME in 0010 0020 0040 0080 0160 0320 0640 1280 2560; do + echo tracing process:$1 for :$SLTIME seconds. Repetition:$REP + + ./trace-memalloc-proc.d $1 > ./trace-memalloc-proc.$SLTIME.$REP & + dtracepid=$! + + echo started dtrace process, pid:$dtracepid + sleep $SLTIME + kill -SIGINT $dtracepid + echo sent interrupt signal to dtrace process + wait $dtracepid + echo dtrace finished + done +done diff --git a/resources/D scripts/trace-malloc-free.d b/resources/D scripts/trace-malloc-free.d deleted file mode 100644 index 64e0eb5..0000000 --- a/resources/D scripts/trace-malloc-free.d +++ /dev/null @@ -1,41 +0,0 @@ -#!/usr/sbin/dtrace -s - -/* - * Original version from : - # https://blogs.oracle.com/openomics/investigating-memory-leaks-with-dtrace - - adapted to only trace malloc and free operations - Removed internal logic of the d program as the java analyzer performes an analysis of the traces. 
-*/ - -#pragma D option quiet -#pragma D option aggrate=100us -#pragma D option aggsize=1g -#pragma D option bufpolicy=fill -#pragma D option bufsize=1g - -pid$1::malloc:entry -{ - self->size = arg0; -} - -pid$1::malloc:return -/self->size/ -{ - /* print details of the allocation */ - printf("<__%i;%Y;%d;malloc;0x%x;%d;\n",i++, walltimestamp, tid, arg1, self->size); - ustack(50); - printf("__>\n\n"); - self->size=0; -} - -pid$1::free:entry -{ - /* print details of the deallocation */ - printf("<__%i;%Y;%d;free;0x%x__>\n",i++, walltimestamp, tid, arg0); -} - -END -{ - printf("== FINISHED ==\n\n"); -} \ No newline at end of file diff --git a/resources/D scripts/trace-memalloc-proc.d b/resources/D scripts/trace-memalloc-proc.d new file mode 100644 index 0000000..67a033b --- /dev/null +++ b/resources/D scripts/trace-memalloc-proc.d @@ -0,0 +1,110 @@ +#!/usr/sbin/dtrace -s + +/* +* Dtrace script that logs the number of times call stacks allocated or freed memory +* The output of the script is further processed as described in +* https://github.com/ppissias/DTLeakAnalyzer +* +* Adapt the aggsize, aggsize and bufsize parameters accordingly if needed. +* Author Petros Pissias +*/ + +#pragma D option quiet +#pragma D option aggrate=100us +#pragma D option aggsize=100m +#pragma D option bufpolicy=fill +#pragma D option bufsize=100m + + +#!/usr/sbin/dtrace -s + +pid$1::malloc:entry +{ + self->trace = 1; + self->size = arg0; +} + +pid$1::malloc:return +/self->trace == 1/ +{ + @allocstacks[ustack(50)] = count(); + @counts["created"] = count(); + counts["pending"]++; + + self->trace = 0; + self->size = 0; +} + + +pid$1::realloc:entry +{ + self->trace = 1; + self->size = arg1; + self->oldptr = arg0; +} + +pid$1::realloc:return +/ (self->trace == 1) && (self->size == 0)/ +{ + /* this is same as free, size=0 */ + @deallocstacks[ustack(50)] = count(); + @counts["deleted"] = count(); + counts["pending"]--; + + self->trace = 0; + self->size = 0; + self->oldptr = 0; +} + +pid$1::realloc:return +/ (self->trace == 1) && (self->size > 0)/ +{ + /* this is a deallocation and a new allocation */ + @deallocstacks[ustack(50)] = count(); + @allocstacks[ustack(50)] = count(); + + self->trace = 0; + self->size = 0; + self->oldptr = 0; +} + +pid$1::calloc:entry +{ + self->trace = 1; + self->size = arg1; + self->nelements = arg0; +} + +pid$1::calloc:return +/self->trace == 1/ +{ + /* log the memory allocation */ + @allocstacks[ustack(50)] = count(); + @counts["created"] = count(); + counts["pending"]++; + + self->trace = 0; + self->size = 0; + self->nelements = 0; +} + + + +pid$1::free:entry +{ + @deallocstacks[ustack(50)] = count(); + @counts["deleted"] = count(); + counts["pending"]--; +} + +END +{ + printf("== FINISHED ==\n\n"); + printf("== allocation stacks ==\n\n"); + printa(@allocstacks); + printf("\n== deallocation stacks ==\n\n"); + printa(@deallocstacks); + printf("\n== mem allocations vs deletions ==\n\n"); + printa(@counts); + printf("number of allocations - number of deallocations: %d",counts["pending"]); +} diff --git a/resources/D scripts/trace-memalloc.d b/resources/D scripts/trace-memalloc.d new file mode 100644 index 0000000..c64e6a2 --- /dev/null +++ b/resources/D scripts/trace-memalloc.d @@ -0,0 +1,96 @@ +#!/usr/sbin/dtrace -s + +/* +* Thanks to : +* # http://www.brendangregg.com/Solaris/memoryflamegraphs.html +* # http://ewaldertl.blogspot.com/2010/09/debugging-memory-leaks-with-dtrace-and.html +* +* Dtrace script that logs all +* malloc, calloc, realloc and free calls and their call stacks +* +* The 
output of the script is further processed as described in +* https://github.com/ppissias/DTLeakAnalyzer +* +* Adapt the aggsize, aggsize and bufsize parameters accordingly if needed. +* Author Petros Pissias +*/ + +#pragma D option quiet +#pragma D option aggrate=100us +#pragma D option bufpolicy=fill +#pragma D option bufsize=100m + + +#!/usr/sbin/dtrace -s + +pid$1::malloc:entry +{ + self->trace = 1; + self->size = arg0; +} + +pid$1::malloc:return +/self->trace == 1/ +{ + /* log the memory allocation */ + printf("<__%i;%Y;%d;malloc;0x%x;%d;\n", i++, walltimestamp, tid, arg1, self->size); + ustack(50); + printf("__>\n\n"); + + self->trace = 0; + self->size = 0; +} + + +pid$1::realloc:entry +{ + self->trace = 1; + self->size = arg1; + self->oldptr = arg0; +} + +pid$1::realloc:return +/self->trace == 1/ +{ + /* log the memory re-allocation. Log the old memory address and the new memory address */ + printf("<__%i;%Y;%d;realloc;0x%x;0x%x;%d;\n", i++, walltimestamp, tid, self->oldptr, arg1, self->size); + ustack(50); + printf("__>\n\n"); + + self->trace = 0; + self->size = 0; + self->oldptr = 0; +} + +pid$1::calloc:entry +{ + self->trace = 1; + self->size = arg1; + self->nelements = arg0; +} + +pid$1::calloc:return +/self->trace == 1/ +{ + /* log the memory allocation with the total size*/ + printf("<__%i;%Y;%d;calloc;0x%x;%d;\n", i++, walltimestamp, tid, arg1, self->size*self->nelements); + ustack(50); + printf("__>\n\n"); + + self->trace = 0; + self->size = 0; + self->nelements = 0; +} + +pid$1::free:entry +{ + printf("<__%i;%Y;%d;free;0x%x;\n", i++, walltimestamp, tid, arg0); + ustack(50); + printf("__>\n\n"); +} + +END +{ + printf("== FINISHED ==\n\n"); +} + diff --git a/resources/D scripts/trace-new-delete.d b/resources/D scripts/trace-new-delete.d deleted file mode 100644 index 91469a0..0000000 --- a/resources/D scripts/trace-new-delete.d +++ /dev/null @@ -1,83 +0,0 @@ -#!/usr/sbin/dtrace -s - -/* - * Original version from : - # https://blogs.oracle.com/openomics/investigating-memory-leaks-with-dtrace - - adapted to trace also delete and new[] operations. - Removed internal logic of the d program as the java analyzer performes an analysis of the traces. 
- -*/ - -#pragma D option quiet -#pragma D option aggrate=100us -#pragma D option aggsize=1g -#pragma D option bufpolicy=fill -#pragma D option bufsize=1g - -/* -__1c2K6Fpv_v_ == void operator delete[](void*) -__1c2N6FI_pv_ == void*operator new[](unsigned) -__1c2k6Fpv_v_ == void operator delete(void*) -__1c2n6FI_pv_ == void*operator new(unsigned) -*/ - -/* operator new */ -pid$1::__1c2n6FI_pv_:entry -{ - /* log allocation size */ - self->size = arg0; -} - -pid$1::__1c2n6FI_pv_:return -/self->size/ -{ - /* print details of the allocation */ - printf("<__%i;%Y;%d;new;0x%x;%d;\n", i++, walltimestamp, tid, arg1, self->size); - ustack(50); - printf("__>\n\n"); - self->size=0; -} - - -/* delete operator */ -pid$1::__1c2k6Fpv_v_:entry -{ - /* print details of the deallocation */ - printf("<__%i;%Y;%d;delete;0x%x\n",i++, walltimestamp, tid, arg0); - ustack(50); - printf("__>\n\n"); -} - - -/* operator new[] , we log that this was created with new[]*/ -pid$1::__1c2N6FI_pv_:entry -{ - self->sizeArray = arg0; -} - -pid$1::__1c2N6FI_pv_:return -/self->sizeArray/ -{ - /* print details of the allocation */ - printf("<__%i;%Y;%d;new[];0x%x;%d;\n", i++, walltimestamp, tid, arg1, self->sizeArray); - ustack(50); - printf("__>\n\n"); - self->sizeArray=0; -} - - -/* delete[] operator */ -pid$1::__1c2K6Fpv_v_:entry -{ - /* print details of the deallocation */ - printf("<__%i;%Y;%d;delete[];0x%x\n",i++, walltimestamp, tid, arg0); - ustack(50); - printf("__>\n\n"); -} - - -END -{ - printf("== FINISHED ==\n\n"); -} \ No newline at end of file diff --git a/resources/D scripts/trace-procmem-increase.d b/resources/D scripts/trace-procmem-increase.d new file mode 100644 index 0000000..c6869a7 --- /dev/null +++ b/resources/D scripts/trace-procmem-increase.d @@ -0,0 +1,52 @@ +#!/usr/sbin/dtrace -s + +/* +* Thanks to : +* # http://www.brendangregg.com/Solaris/memoryflamegraphs.html +* +* Dtrace script that logs all call stacks that caused a process memory increase +* The output of the script is further processed as described in +* https://github.com/ppissias/DTLeakAnalyzer +* +* Author Petros Pissias +*/ + +#pragma D option quiet + +#!/usr/sbin/dtrace -s + +pid$1::brk:entry +{ + self->trace = 1; + self->newaddr = arg0; +} + +pid$1::brk:return +/self->trace == 1/ +{ + /* log the memory allocation */ + printf("<__%i;%Y;%d;brk;0x%x;%d;\n", i++, walltimestamp, tid, self->newaddr, arg1); + ustack(50); + printf("__>\n\n"); + + self->trace = 0; + self->newaddr = 0; +} + +pid$1::sbrk:entry +{ + self->trace = 1; + self->incrsize = arg0; +} + +pid$1::sbrk:return +/self->trace == 1/ +{ + /* log the memory allocation */ + printf("<__%i;%Y;%d;sbrk;0x%x;%d;\n", i++, walltimestamp, tid, arg1, self->incrsize); + ustack(50); + printf("__>\n\n"); + + self->trace = 0; + self->incrsize = 0; +} diff --git a/resources/DtLeakAnalyzer.docx b/resources/DtLeakAnalyzer.docx new file mode 100644 index 0000000..f4816a2 Binary files /dev/null and b/resources/DtLeakAnalyzer.docx differ diff --git a/resources/DtLeakAnalyzer.pdf b/resources/DtLeakAnalyzer.pdf new file mode 100644 index 0000000..81ad2c2 Binary files /dev/null and b/resources/DtLeakAnalyzer.pdf differ diff --git a/src/DTLeakAnalyzer.java b/src/DTLeakAnalyzer.java index 3b8db0f..e9e904f 100644 --- a/src/DTLeakAnalyzer.java +++ b/src/DTLeakAnalyzer.java @@ -1,151 +1,616 @@ import java.io.BufferedReader; +import java.io.File; +import java.io.FileNotFoundException; import java.io.FileReader; +import java.io.FilenameFilter; import java.io.IOException; import java.io.PrintWriter; +import 
java.io.UnsupportedEncodingException; import java.util.ArrayList; +import java.util.Arrays; import java.util.Collections; import java.util.Comparator; import java.util.Date; import java.util.HashMap; import java.util.List; import java.util.Map; +import java.util.logging.LogManager; /** - * Tool for analzing logs produced with dtrace under Solaris in support of + * Tool for analyzing logs produced with dtrace (under Solaris and other platforms that support dtrace) in support of * memory leak investigations. * + * The processing logic is written as a single class file intentionally, as to + * try and provide a single "processing script" of the trace files. + * However the processing complexity has risen and it might be necessary to split the logic + * in multiple files if further processing logic is to be added. + * * @author Petros Pissias * */ public class DTLeakAnalyzer { - //this file contains the traces private final String inFile; - //this file eill contian the analysis of the traces + //this file will contain the analysis of the traces private final String outFile; //start / end trace sequences - private final String entryStartCharSequence = "<__"; - private final String entryEndCharSequence = "__>"; + private static final String entryStartCharSequence = "<__"; + private static final String entryEndCharSequence = "__>"; + + //the log file output writer + private final PrintWriter writer; + + //used for memory allocator analysis + private final List uniquePotentialLeakStacks; + private final List uniquePotentialLeakStacksNeverFreed; //more confident potential leaks + private final List uniquePotentialWrongFreeStacks; + private final List uniquePotentialWrongFreeStacksNeverCorrectlyFreed; //more confident wrong free/deletes + private int totalPoteltialLeakSuspects; + private int totalPotentialWrongFreeSuspects; + //counters + private int totalMallocCalls = 0; + private int totalCallocCalls = 0; + private int totalReallocCalls = 0; + private int totalFreeCalls = 0; + + //combined potential leak stack + private String combinedLeakStackSuspects = ""; + private String combinedLeakStackStrongSuspects = ""; + + //store information for combined file processing + private final List uniqueSuccessfulFreeStacks; //store unique stacks that correctly freed memory + private final List uniqueSuccessfullyDeletedStacks; //store unique stacks that allocated memory that was correctly freed + + //used for memory allocator analysis to detect double free operations + private final List uniqueDoubleFreeStacks; + private int totalDoubleFreeStacks; + + //used for brk processing + private final List uniqueBrkStacks; //all brk stacks along with their appearance frequency and size + private final List uniqueFailedBrkStacks; + private int totalBrkIncreaseStacks; + private int totalBrkDecreaseStacks; + private int totalBrkNeutralStacks; + private int totalBrkFailedStacks; + + //combined brk stack + private String combinedBrkStacks; + + //used for processed files analysis + private final List uniqueAllocationStacks; + private final List uniqueDeallocationStacks; + private final List uniqueUnfreedAllocationStacks; + private final List uniqueUnknownDeallocationStacks; + + public static void printArgs(){ + System.out.println("arguments: -f " );; + System.out.println("arguments: -d " ); + System.out.println("arguments: -p -d " ); + System.out.println(" = memalloc or brk.\nExample: -f memalloc inputFile outputFile"); + } + + /** + * Entry point to start the analysis tool + * @param args arguments: + */ + public static 
void main(String[] args) throws IOException{ + + if (args.length < 3 || args.length > 5) { + printArgs(); + return; + } + + if (args.length == 4) { + if (args[1].equals("memalloc")) { + //single file mode, generic + DTLeakAnalyzer dtLeakAnalyzer = new DTLeakAnalyzer(args[2], args[3]); + DTLeakAnalyzer.logMessage("Started memory allocator analysis for file "+args[2]+" on:"+new Date(), true, dtLeakAnalyzer.writer); + dtLeakAnalyzer.performMemoyAllocatorAnalysis(); + DTLeakAnalyzer.logMessage("Finished memory allocator analysis for file "+args[2]+" on:"+new Date(), true, dtLeakAnalyzer.writer); + dtLeakAnalyzer.printAnalysisInformation(TraceFileType.MEMALLOC); + } else if (args[1].equals("brk")) { + //single file mode, generic + DTLeakAnalyzer dtLeakAnalyzer = new DTLeakAnalyzer(args[2], args[3]); + DTLeakAnalyzer.logMessage("Started process memory increase analysis for file "+args[2]+" on:"+new Date(), true, dtLeakAnalyzer.writer); + dtLeakAnalyzer.performBrkAnalysis(); + DTLeakAnalyzer.logMessage("Finished process memory increase analysis for file "+args[2]+" on:"+new Date(), true, dtLeakAnalyzer.writer); + dtLeakAnalyzer.printAnalysisInformation(TraceFileType.BRK); + }else { + printArgs(); + return; + } + }else if (args.length == 3) { + if (args[0].equals("-d")) { + //directory mode , memory allocator analysis + + File[] files = new File(args[1]).listFiles(new FilenameFilter() { + + @Override + public boolean accept(File dir, String name) { + if (name.endsWith(".report")) { + return false; + } else { + return true; + } + } + + }); + Arrays.sort(files); + + //get each input file and do an analysis. Then write the combined results + Map fileAnalysisResults = new HashMap(); + for(File resultsFile : files) { + DTLeakAnalyzer dtLeakAnalyzer = new DTLeakAnalyzer(resultsFile.getAbsolutePath(), resultsFile.getAbsolutePath()+".report"); + DTLeakAnalyzer.logMessage("Started memory allocator analysis for file "+resultsFile+" on:"+new Date(), true, dtLeakAnalyzer.writer); + dtLeakAnalyzer.performMemoyAllocatorAnalysis(); + DTLeakAnalyzer.logMessage("Finished memory allocator analysis for file "+resultsFile+" on:"+new Date(), true, dtLeakAnalyzer.writer); + dtLeakAnalyzer.printAnalysisInformation(TraceFileType.MEMALLOC); + fileAnalysisResults.put(resultsFile, dtLeakAnalyzer); + } + + //now write the combined results + printMemoryAllocatorCombinedAnalysisResults(fileAnalysisResults, args[2]); + + } else { + printArgs(); + return; + } + } else if (args.length == 5 || args.length == 6) { + if (args[0].equals("-p") && args[2].equals("-d")) { + //directory mode for already "dtrace-processed" files + //in this mode, we process all the memory allocator trace files and gather information + //that is used in providing heuristics. + File[] memallocFiles = new File(args[3]).listFiles(new FilenameFilter() { + + @Override + public boolean accept(File dir, String name) { + if (name.endsWith(".report")) { + return false; + } else { + return true; + } + } + + }); + Arrays.sort(memallocFiles); + + //get the relationship information + Map> stackRelationships = getFreeMemoryStackRelationships(memallocFiles); + + //process processed files + + File[] processedfiles = new File(args[1]).listFiles(new FilenameFilter() { + + @Override + public boolean accept(File dir, String name) { + if (name.endsWith(".report")) { + return false; + } else { + return true; + } + } + + }); + Arrays.sort(processedfiles); + + //get each input file and do an analysis. 
Then write the combined results + Map fileAnalysisResults = new HashMap(); + for(File resultsFile : processedfiles) { + DTLeakAnalyzer dtLeakAnalyzer = new DTLeakAnalyzer(resultsFile.getAbsolutePath(), resultsFile.getAbsolutePath()+".report"); + DTLeakAnalyzer.logMessage("Started processing file "+resultsFile+" on:"+new Date(), true, dtLeakAnalyzer.writer); + dtLeakAnalyzer.performProcessedFileAnalysis(stackRelationships); + DTLeakAnalyzer.logMessage("Finished processing file "+resultsFile+" on:"+new Date(), true, dtLeakAnalyzer.writer); + int numAlloc = 0; + int numDealloc=0; + for (StackOccurence rep :dtLeakAnalyzer.uniqueAllocationStacks) { + numAlloc += rep.timesFound; + } + for (StackOccurence rep :dtLeakAnalyzer.uniqueDeallocationStacks) { + numDealloc += rep.timesFound; + } + DTLeakAnalyzer.logMessage("Found "+numAlloc+" memory allocation calls", true, dtLeakAnalyzer.writer); + DTLeakAnalyzer.logMessage("Found "+dtLeakAnalyzer.uniqueAllocationStacks.size()+" unique memory allocation stacks", true, dtLeakAnalyzer.writer); + DTLeakAnalyzer.logMessage("Found "+numDealloc+" memory de-allocation calls", true, dtLeakAnalyzer.writer); + DTLeakAnalyzer.logMessage("Found "+dtLeakAnalyzer.uniqueDeallocationStacks.size()+" unique memory de-allocation stacks", true, dtLeakAnalyzer.writer); + + int numAllocUnfreed = 0; + int numDeallocUnknown = 0; + for (StackOccurence rep :dtLeakAnalyzer.uniqueUnfreedAllocationStacks) { + numAllocUnfreed += rep.timesFound; + } + for (StackOccurence rep :dtLeakAnalyzer.uniqueUnknownDeallocationStacks) { + numDeallocUnknown += rep.timesFound; + } + DTLeakAnalyzer.logMessage("Found "+numAllocUnfreed+" (unfreed) memory allocation calls from "+dtLeakAnalyzer.uniqueUnfreedAllocationStacks.size()+" unique allocation stacks (suspect memory leaks)", true, dtLeakAnalyzer.writer); + DTLeakAnalyzer.logMessage("Found "+numDeallocUnknown+" unknown free calls from "+dtLeakAnalyzer.uniqueUnknownDeallocationStacks.size()+" unique free stacks", true, dtLeakAnalyzer.writer); + + DTLeakAnalyzer.logMessage("number of memory allocation calls - number of free calls = "+(numAlloc-numDealloc)+"\n", true, dtLeakAnalyzer.writer); + fileAnalysisResults.put(resultsFile, dtLeakAnalyzer); + dtLeakAnalyzer.writer.close(); + } + + //now write the combined results + boolean printNormalStacks = false; + if (args.length == 6) { + printNormalStacks = true; + } + printProcessedFilesCombinedAnalysisResults(fileAnalysisResults, args[4], printNormalStacks); + }else { + printArgs(); + return; + } + } + } /** * new instance of the analyzer * @param inFile the traces * @param outFile the output analysis of the traces that will be produced + * @throws UnsupportedEncodingException + * @throws FileNotFoundException * @throws Exception in case the traces cannot be parsed or the files cannot be accessed */ - public DTLeakAnalyzer(String inFile, String outFile) { + public DTLeakAnalyzer(String inFile, String outFile) throws FileNotFoundException, UnsupportedEncodingException { this.inFile = inFile; this.outFile = outFile; + + uniquePotentialLeakStacks = new ArrayList(); + uniquePotentialLeakStacksNeverFreed = new ArrayList(); + uniquePotentialWrongFreeStacks = new ArrayList(); + uniqueDoubleFreeStacks = new ArrayList(); + uniquePotentialWrongFreeStacksNeverCorrectlyFreed = new ArrayList(); + uniqueBrkStacks = new ArrayList(); + uniqueFailedBrkStacks = new ArrayList(); + + //store some for combined operations + uniqueSuccessfulFreeStacks = new ArrayList(); + uniqueSuccessfullyDeletedStacks = new ArrayList(); + 
+ //for dtrace-processed files + uniqueAllocationStacks = new ArrayList(); + uniqueDeallocationStacks = new ArrayList(); + uniqueUnfreedAllocationStacks = new ArrayList(); + uniqueUnknownDeallocationStacks = new ArrayList(); ; + + //open output file + if (outFile == null) { + writer = null; + } else { + writer = new PrintWriter(outFile, "UTF-8"); + } } + /** - * Performs the traces analysis + * Returns the relationships between stacks that free memory and stacks that allocated memory + * More specifically, it links each stack that freed memory, with the stack(s) that had allocated the memory + * @param memallocFiles the memory allocator trace files + * @return the relationship map * @throws IOException */ - public void performAnalysis() throws IOException { - //open output file - PrintWriter writer = new PrintWriter(outFile, "UTF-8"); + public static Map> getFreeMemoryStackRelationships(File[] memallocFiles) throws IOException { + Map> stackRelationshipMap = new HashMap>(); - logMessage("Trace analyser started on "+new Date(), writer, true); + System.out.println("Collecting memory allocator stack relationships"); + for(File resultsFile : memallocFiles) { + System.out.println("processing file:"+resultsFile.getAbsolutePath()); + //map to keep track of memory allocations + Map memoryAllocation = new HashMap(); + + //open the traces file + try (BufferedReader br = new BufferedReader(new FileReader(resultsFile.getAbsolutePath()))) { - //map to keep track of memory allocations - Map memoryAllocation = new HashMap(); - - //map to save new[] and delete[] operations to detect issues with new and delete - Map memoryArrayAllocation = new HashMap(); + //read all entries + MemoryAllocatorTraceEntry traceEntry = null; + while ((traceEntry = readMemoryAllocatorTraceEntry(br)) != null) { + //now process the entry + if (traceEntry.getType().equals(MemoryAllocationTraceEntryType.MALLOC)) { + //sanity check + if (memoryAllocation.containsKey(traceEntry.getAddress())) { + //this should not happen. + throw new IOException("Found allocation on memory address:"+traceEntry.getAddress()+" that was already allocated by: "+memoryAllocation.get(traceEntry.getAddress())); + } + + //add to map + memoryAllocation.put(traceEntry.getAddress(), traceEntry); + + } else if (traceEntry.getType().equals(MemoryAllocationTraceEntryType.CALLOC)) { + //sanity check + if (memoryAllocation.containsKey(traceEntry.getAddress())) { + //this should not happen. + throw new IOException("Found allocation on memory address:"+traceEntry.getAddress()+" that was already allocated by: "+memoryAllocation.get(traceEntry.getAddress())); + } + + //add to map + memoryAllocation.put(traceEntry.getAddress(), traceEntry); + + } else if (traceEntry.getType().equals(MemoryAllocationTraceEntryType.REALLOC)) { + //sanity check + if (traceEntry.getAddress().equals(traceEntry.getPreviousAddress())) { + //the realloc did not move the memory address, no need to do something + + //add to map, updating the previous entry if it exists + memoryAllocation.put(traceEntry.getAddress(), traceEntry); + } else { + //new address, the realloc moved the memory + if (memoryAllocation.containsKey(traceEntry.getAddress())) { + //this should not happen. 
+ throw new IOException("Found allocation on memory address:"+traceEntry.getAddress()+" that was already allocated by: "+memoryAllocation.get(traceEntry.getAddress())); + } + + //add the new address of the allocation + memoryAllocation.put(traceEntry.getAddress(), traceEntry); + + //remove previous allocation + MemoryAllocatorTraceEntry removed = memoryAllocation.remove(traceEntry.getPreviousAddress()); + + //add the previous deallocation to the relationships + //keep a reference of this successful delete stack + /** + * We are doing this special handling here for realloc, because on the .proc d-script + * we treat realloc calls as an allocation and a de-allocation. + * So we must associate the relevant stack as being deleted by this deallocation. + */ + boolean found = false; + OUTTER_LOOP: + for (StackOccurence existingSuccesfulFree : stackRelationshipMap.keySet()) { + if (existingSuccesfulFree.getStack().equals(traceEntry.getCallStack())) { + found = true; + existingSuccesfulFree.increaseTimesFound(); + //add this stack if it does not exist + List relatedAllocationStacks = stackRelationshipMap.get(existingSuccesfulFree); + boolean foundRelatedStack = false; + INNER_LOOP: + for (StackOccurence relatedStackOccurence : relatedAllocationStacks) { + //check if the freed memory was from a stack that we already know + if (relatedStackOccurence.getStack().equals(removed.getCallStack())) { + foundRelatedStack = true; + relatedStackOccurence.increaseTimesFound(); + break INNER_LOOP; + } + } + if (!foundRelatedStack) { //related stack not found + relatedAllocationStacks.add(new StackOccurence(removed.getCallStack())); + } + break OUTTER_LOOP; + } + } + if (!found) { //free stack not found + //create a list and add the stack that its memory allocation was successfully freed + List allocationStacks = new ArrayList(); + allocationStacks.add(new StackOccurence(removed.getCallStack())); + //add the free stack along with the list + stackRelationshipMap.put(new StackOccurence(traceEntry.getCallStack()), allocationStacks); + } + + } + + }else if (traceEntry.getType().equals(MemoryAllocationTraceEntryType.FREE)) { + //check if it exists already on the map + if (memoryAllocation.containsKey(traceEntry.getAddress())) { + //as expected, we had an allocation and this is the de-allocation + MemoryAllocatorTraceEntry removed = memoryAllocation.remove(traceEntry.getAddress()); + + //keep a reference of this successful delete stack + boolean found = false; + OUTTER_LOOP: + for (StackOccurence existingSuccesfulFree : stackRelationshipMap.keySet()) { + if (existingSuccesfulFree.getStack().equals(traceEntry.getCallStack())) { + found = true; + existingSuccesfulFree.increaseTimesFound(); + //add this stack if it does not exist + List relatedAllocationStacks = stackRelationshipMap.get(existingSuccesfulFree); + boolean foundRelatedStack = false; + INNER_LOOP: + for (StackOccurence relatedStackOccurence : relatedAllocationStacks) { + //check if the freed memory was from a stack that we already know + if (relatedStackOccurence.getStack().equals(removed.getCallStack())) { + foundRelatedStack = true; + relatedStackOccurence.increaseTimesFound(); + break INNER_LOOP; + } + } + if (!foundRelatedStack) { //related stack not found + relatedAllocationStacks.add(new StackOccurence(removed.getCallStack())); + } + break OUTTER_LOOP; + } + } + if (!found) { //free stack not found + //create a list and add the stack that its memory allocation was successfully freed + List allocationStacks = new ArrayList(); + allocationStacks.add(new 
StackOccurence(removed.getCallStack())); + //add the free stack along with the list + stackRelationshipMap.put(new StackOccurence(traceEntry.getCallStack()), allocationStacks); + } + + + } else { + //not expected, but can happen since we are not monitoring all allocations from the beginning of the execution + + } + + } else { + throw new IOException("Cannot handle entry type:"+traceEntry.getType()); + } + } + }catch (IOException e) { + System.out.println("problem reading input (traces) file:"+e.getMessage()); + + throw e; + } + } - //keep the previous entry to associate new[] calls. Keep the last entry per thread - Map previousEntries = new HashMap(); + int valuesCount = 0; + for (List values : stackRelationshipMap.values()) { + valuesCount += values.size(); + } + System.out.println("found in total "+stackRelationshipMap.size()+" unique free stacks, that freed memory allocated from "+valuesCount+" stacks\n"); + return stackRelationshipMap; + } + + + /** + * Performs the traces analysis for a generic program (using free/malloc/realloc/calloc) + * @throws IOException + */ + public void performMemoyAllocatorAnalysis() throws IOException { + //map to keep track of memory allocations + Map memoryAllocation = new HashMap(); + + //list to keep track of free operations to unallocated memory + List freeUnallocagedMemoryStacks = new ArrayList(); + + //map to keep track of memory de-allocations related to free operations, for detecting double free operations + Map freedAndNotReusedMemory = new HashMap(); - //array to keep detected wrong deletes - List wrongDeletes = new ArrayList(); + //list to keep track of double free stacks (errors) + List doubleFree = new ArrayList(); - //open the traces file and process each line + //open the traces file try (BufferedReader br = new BufferedReader(new FileReader(inFile))) { //read all entries - DTLeakLogEntry traceEntry = null; - while ((traceEntry = readTraceEntry(br)) != null) { - - //first do some consistency checks - if (previousEntries.get(traceEntry.getThreadId()) != null) { - if (previousEntries.get(traceEntry.getThreadId()).getType().equals(DTLeakLogEntryType.NEWARRAY)){ - if (!traceEntry.getType().equals(DTLeakLogEntryType.NEW)) { - logMessage("Previous entry was new[] but this one is not new\n previous:"+previousEntries.get(traceEntry.getThreadId())+"\n\ncurrent:"+traceEntry, writer, true); - throw new IOException("Error: Previous entry was new[] but this one is not new. See log file for more information"); - } + MemoryAllocatorTraceEntry traceEntry = null; + while ((traceEntry = readMemoryAllocatorTraceEntry(br)) != null) { + + //now process the entry + if (traceEntry.getType().equals(MemoryAllocationTraceEntryType.MALLOC)) { + totalMallocCalls++; + //sanity check + if (memoryAllocation.containsKey(traceEntry.getAddress())) { + //this should not happen. + throw new IOException("Entry:"+traceEntry+"\nFound allocation on memory address:"+traceEntry.getAddress()+" that was already allocated by: "+memoryAllocation.get(traceEntry.getAddress())); } - if (previousEntries.get(traceEntry.getThreadId()).getType().equals(DTLeakLogEntryType.DELETEARRAY)){ - if (!traceEntry.getType().equals(DTLeakLogEntryType.DELETE)) { - logMessage("Previous entry was delete[] but this one is not delete\n previous:"+previousEntries.get(traceEntry.getThreadId())+"\n\ncurrent:"+traceEntry, writer, true); - throw new IOException("Error: Previous entry was delete[] but this one is not delete. 
See log file for more information"); - } + //add to map + memoryAllocation.put(traceEntry.getAddress(), traceEntry); + + if (freedAndNotReusedMemory.containsKey(traceEntry.getAddress())) { + //System.out.println("removing from memoryFree list:"+traceEntry.getAddress()); + //we now re-use memory that was freed, remove the address from the map + freedAndNotReusedMemory.remove(traceEntry.getAddress()); } - } - //now process the entry - if (traceEntry.getType().equals(DTLeakLogEntryType.NEW)) { + + } else if (traceEntry.getType().equals(MemoryAllocationTraceEntryType.CALLOC)) { + totalCallocCalls++; //sanity check if (memoryAllocation.containsKey(traceEntry.getAddress())) { //this should not happen. - logMessage("Found allocation on memory address:"+traceEntry.getAddress()+" that was already allocated by: "+memoryAllocation.get(traceEntry.getAddress()), writer, true); - throw new IOException("Error: memory allocation in allocated memory. Corrupted trace file. See log file for more information"); + throw new IOException("Entry:"+traceEntry+"\nFound allocation on memory address:"+traceEntry.getAddress()+" that was already allocated by: "+memoryAllocation.get(traceEntry.getAddress())); } //add to map memoryAllocation.put(traceEntry.getAddress(), traceEntry); - if (previousEntries.get(traceEntry.getThreadId()) != null) { - //check if this was a new[] call - if (previousEntries.get(traceEntry.getThreadId()).getType().equals(DTLeakLogEntryType.NEWARRAY)){ - //add to list - memoryArrayAllocation.put(traceEntry.getAddress(), traceEntry); + + if (freedAndNotReusedMemory.containsKey(traceEntry.getAddress())) { + //System.out.println("removing from memoryFree list:"+traceEntry.getAddress()); + //we now re-use memory that was freed, remove the address from the map + freedAndNotReusedMemory.remove(traceEntry.getAddress()); + } + + } else if (traceEntry.getType().equals(MemoryAllocationTraceEntryType.REALLOC)) { + totalReallocCalls++; + //sanity check + if (traceEntry.getAddress().equals(traceEntry.getPreviousAddress())) { + //the realloc did not move the memory address, no need to do something + + //add to map, updating the previous entry if it exists + memoryAllocation.put(traceEntry.getAddress(), traceEntry); + } else { + //new address, the realloc moved the memory + if (memoryAllocation.containsKey(traceEntry.getAddress())) { + //this should not happen. 
+ throw new IOException("Entry:"+traceEntry+"\nFound allocation on memory address:"+traceEntry.getAddress()+" that was already allocated by: "+memoryAllocation.get(traceEntry.getAddress())); } + + //remove previous allocation + memoryAllocation.remove(traceEntry.getPreviousAddress()); + //add the new address of the allocation + memoryAllocation.put(traceEntry.getAddress(), traceEntry); } - } else if (traceEntry.getType().equals(DTLeakLogEntryType.DELETE)) { + + if (freedAndNotReusedMemory.containsKey(traceEntry.getAddress())) { + //System.out.println("removing from memoryFree list:"+traceEntry.getAddress()); + //we now re-use memory that was freed, remove the address from the map + freedAndNotReusedMemory.remove(traceEntry.getAddress()); + } + + }else if (traceEntry.getType().equals(MemoryAllocationTraceEntryType.FREE)) { + totalFreeCalls++; //check if it exists already on the map if (memoryAllocation.containsKey(traceEntry.getAddress())) { //as expected, we had an allocation and this is the de-allocation - memoryAllocation.remove(traceEntry.getAddress()); - } else { - //not expected, but can happen since we are not monitoring all allocation from the beginning of the execution - logMessage("Info: found deallocation from unallocated memory:"+traceEntry.getAddress(), writer, false); - } - if (memoryArrayAllocation.containsKey(traceEntry.getAddress())) { - //logMessage("Error: found delete for memory that was allocated with new[]\n:"+traceEntry+"\n", writer, false); - //remove so that we do not detect it again and again - DTLeakLogEntry arrayAllocationEntry = memoryArrayAllocation.remove(traceEntry.getAddress()); - - //store this error - wrongDeletes.add(new DTLeakWrongDeleteEntry(traceEntry, arrayAllocationEntry)); - } - } else if (traceEntry.getType().equals(DTLeakLogEntryType.NEWARRAY)) { - //found new[] entry - //no need to do something at this point, it will be checked on the next entry which will be a new operation - } else if (traceEntry.getType().equals(DTLeakLogEntryType.DELETEARRAY)) { - if (!memoryArrayAllocation.containsKey(traceEntry.getAddress())) { - logMessage("Info: found array deallocation from unallocated memory:"+traceEntry.getAddress(), writer, false); + MemoryAllocatorTraceEntry removed = memoryAllocation.remove(traceEntry.getAddress()); + + //keep a reference of this successful delete stack + boolean found = false; + for (MemoryAllocatorTraceEntry existingSuccesfulDelete : uniqueSuccessfulFreeStacks) { + if (existingSuccesfulDelete.getCallStack().equals(traceEntry.getCallStack())) { + found = true; + break; + } + } + if (!found) { + uniqueSuccessfulFreeStacks.add(traceEntry); + } + + //keep a reference of the successfully deleted stack + boolean foundRemoved = false; + for (MemoryAllocatorTraceEntry existingSuccesfullyDeleted : uniqueSuccessfullyDeletedStacks) { + if (existingSuccesfullyDeleted.getCallStack().equals(removed.getCallStack())) { + foundRemoved = true; + break; + } + } + + if (!foundRemoved) { + uniqueSuccessfullyDeletedStacks.add(removed); + } + + //add to the map to keep track for double free operations + if (freedAndNotReusedMemory.containsKey(traceEntry.getAddress())) { + //this is an error + throw new IOException("Entry:"+traceEntry+"\nFound free on memory address:"+traceEntry.getAddress()+" that was succesfully removed from the memory allocation map, but appears also on the freed and not reused addresses"); + } else { + //does not contain + freedAndNotReusedMemory.put(traceEntry.getAddress(), traceEntry); + } + } else { - //remove - 
memoryArrayAllocation.remove(traceEntry.getAddress()); + //not expected, but can happen since we are not monitoring all allocations from the beginning of the execution + + //log this stack that did a free on unallocated memory + freeUnallocagedMemoryStacks.add(traceEntry); + + if (freedAndNotReusedMemory.containsKey(traceEntry.getAddress())) { + //System.out.println("adding to doubleFree list:"+traceEntry.getAddress()); + //double free! log the error + doubleFree.add(traceEntry); + } else { + //System.out.println("adding to freeMemory map:"+traceEntry.getAddress()); + //log the address that the free was done + freedAndNotReusedMemory.put(traceEntry.getAddress(), traceEntry); + } + } + } else { + throw new IOException("Cannot handle entry type:"+traceEntry.getType()); } - //save entry to associate on new[] operations - previousEntries.put(traceEntry.getThreadId(), traceEntry); } - logMessage("\n\nProcessing wrong deletes [delete on memory that was allocated with new[]), found "+wrongDeletes.size()+" instances", writer, true); + totalDoubleFreeStacks = doubleFree.size(); - //find unique cases and store them - List uniqueWrongDeleteStacks = new ArrayList(); - for (DTLeakWrongDeleteEntry entry : wrongDeletes) { + //find unique cases for qrong deletes and store them + for (MemoryAllocatorTraceEntry entry : doubleFree) { boolean found = false; - for (DTLeakWrongDeleteReportEntry reportEntry : uniqueWrongDeleteStacks) { - if (reportEntry.getStack().equals(entry.getDeleteLogEntry().getCallStack())) { + for (StackOccurence reportEntry : uniqueDoubleFreeStacks) { + if (reportEntry.getStack().equals(entry.getCallStack())) { found = true; //increase counter reportEntry.increaseTimesFound(); @@ -153,34 +618,70 @@ public void performAnalysis() throws IOException { } } if (!found) { - uniqueWrongDeleteStacks.add(new DTLeakWrongDeleteReportEntry(entry.getDeleteLogEntry().getCallStack(), 1, entry)); + uniqueDoubleFreeStacks.add(new StackOccurence(entry.getCallStack())); } } - - logMessage("found "+uniqueWrongDeleteStacks.size()+" unique wrong delete stacks", writer, true); + //sort list according to times found - Collections.sort(uniqueWrongDeleteStacks, new Comparator () { + Collections.sort(uniqueDoubleFreeStacks, new Comparator () { @Override - public int compare(DTLeakWrongDeleteReportEntry o1, - DTLeakWrongDeleteReportEntry o2) { + public int compare(StackOccurence o1, + StackOccurence o2) { return o2.getTimesFound()-o1.getTimesFound(); } }); - for (DTLeakWrongDeleteReportEntry wrongDeleteStack : uniqueWrongDeleteStacks) { - logMessage("Found wrong delete stack "+wrongDeleteStack.getTimesFound()+" times. Stack:\n"+wrongDeleteStack.getStack(), writer, false); - logMessage("Example of this allocation. Allocation stack:\n\n"+wrongDeleteStack.getExample().getArrayAllocationLogEntry()+"\n\n", writer, false); + + //now process deletes on wrong addresses. 
+ totalPotentialWrongFreeSuspects = freeUnallocagedMemoryStacks.size(); + for (MemoryAllocatorTraceEntry entry : freeUnallocagedMemoryStacks) { + boolean found = false; + for (StackOccurence reportEntry : uniquePotentialWrongFreeStacks) { + if (reportEntry.getStack().equals(entry.getCallStack())) { + found = true; + //increase counter + reportEntry.increaseTimesFound(); + break; + } + } + if (!found) { + uniquePotentialWrongFreeStacks.add(new StackOccurence(entry.getCallStack())); + } } + + + //sort based on frequency + Collections.sort(uniquePotentialWrongFreeStacks, new Comparator() { + + @Override + public int compare(StackOccurence o1, StackOccurence o2) { + return o2.getTimesFound() - o1.getTimesFound(); + } + + }); - logMessage("Analyzing "+memoryAllocation.keySet().size()+" potential memory leaks", writer, true); + //for each unique unallocated delete stack, now find the ones that have never freed successfully memory + for (StackOccurence entry :uniquePotentialWrongFreeStacks) { + boolean found = false; + for (MemoryAllocatorTraceEntry sucDeleteEntry : uniqueSuccessfulFreeStacks) { + if (sucDeleteEntry.getCallStack().equals(entry.getStack())) { + found = true; + break; + } + } + if (!found) { + //this stack has never correctly freed / deleted memory + uniquePotentialWrongFreeStacksNeverCorrectlyFreed.add(entry); + } + } //second step, analyze non empty memory allocations on the map to find unique call stacks - List uniquePotentialLeakStacks = new ArrayList(); + totalPoteltialLeakSuspects = memoryAllocation.keySet().size(); for (String memoryAddress : memoryAllocation.keySet()) { - DTLeakLogEntry unallocatedMemoryCallStack = memoryAllocation.get(memoryAddress); + MemoryAllocatorTraceEntry unallocatedMemoryCallStack = memoryAllocation.get(memoryAddress); boolean found = false; - for (DTLeakReportEntry uniquePLeak :uniquePotentialLeakStacks) { + for (StackOccurence uniquePLeak :uniquePotentialLeakStacks) { if (uniquePLeak.getStack().equals(unallocatedMemoryCallStack.getCallStack())) { found = true; uniquePLeak.increaseTimesFound(); @@ -189,147 +690,1440 @@ public int compare(DTLeakWrongDeleteReportEntry o1, } if (!found) { //insert for the first time - uniquePotentialLeakStacks.add(new DTLeakReportEntry(unallocatedMemoryCallStack.getCallStack(), 1)); + uniquePotentialLeakStacks.add(new StackOccurence(unallocatedMemoryCallStack.getCallStack())); } } - //sort - Collections.sort(uniquePotentialLeakStacks, new Comparator() { + //sort based on frequency + Collections.sort(uniquePotentialLeakStacks, new Comparator() { @Override - public int compare(DTLeakReportEntry o1, DTLeakReportEntry o2) { + public int compare(StackOccurence o1, StackOccurence o2) { return o2.getTimesFound() - o1.getTimesFound(); } }); + //now calculate from the potential leaks, the ones that have never been freed + for (StackOccurence entry :uniquePotentialLeakStacks) { + boolean found = false; + for (MemoryAllocatorTraceEntry sucDeletedStackEntry : uniqueSuccessfullyDeletedStacks) { + if (sucDeletedStackEntry.getCallStack().equals(entry.getStack())) { + found = true; + break; + } + } + if (!found) { + //this stack has never correctly freed / deleted memory + uniquePotentialLeakStacksNeverFreed.add(entry); + } + } - - logMessage("Processing completed.\nDetected "+uniquePotentialLeakStacks.size()+" potential memory leaks\n", writer, true); - - - for (DTLeakReportEntry suspectCallStack : uniquePotentialLeakStacks) { - logMessage("Suspect leak stack found "+suspectCallStack.getTimesFound()+" times",writer, false); - 
logMessage(suspectCallStack.getStack()+"\n\n",writer, false); + //calculate combined suspect leak stack + + if (uniquePotentialLeakStacks.size() > 1) { + //initial conditions for the combined common stack print + int stackDepth = 0; + //create initial positions array (all of them) + Integer[] positions = new Integer[uniquePotentialLeakStacks.size()]; + for (int i=0;i 1) { + int stackDepth = 0; + Integer[] positions = new Integer[uniquePotentialLeakStacksNeverFreed.size()]; + for (int i=0;i brkAllocationStacks = new ArrayList(); + List brkDeAllocationStacks = new ArrayList(); + List failedBrkCalls = new ArrayList(); + List noIncreaseCalls = new ArrayList(); + + //open the traces file and process each line + try (BufferedReader br = new BufferedReader(new FileReader(inFile))) { + + //read all entries + BrkTraceEntry traceEntry = null; + while ((traceEntry = readBrkTraceEntry(br)) != null) { + + //now process the entry + if (traceEntry.getType().equals(BrkTraceEntryType.BRK)) { + if (traceEntry.isSuccess()) { + if (currentBrkAddress == 0) { + //first time + currentBrkAddress = Long.decode(traceEntry.getAddress()); + } else { + //we already have a break address + //decode new brk address + long newBrkAddress = Long.decode(traceEntry.getAddress()); + //calculate mem increase + long memIncrease = newBrkAddress - currentBrkAddress; + //store new current brk address + currentBrkAddress = newBrkAddress; + + if (memIncrease == 0) { + noIncreaseCalls.add(traceEntry); + } else if (memIncrease < 0) { + brkDeAllocationStacks.add(traceEntry); + } else if (memIncrease > 0) { + brkAllocationStacks.add(traceEntry); + } + } + } else { + //failed brk call + failedBrkCalls.add(traceEntry); + } + + } else if (traceEntry.getType().equals(BrkTraceEntryType.SBRK)) { + if (traceEntry.isSuccess()) { + + long previousBrkAddress = Long.decode(traceEntry.getAddress()); + long memIncrease = traceEntry.getSize(); + + long newBrkAddress = previousBrkAddress + memIncrease; + //store new current brk address + currentBrkAddress = newBrkAddress; + + if (memIncrease == 0) { + noIncreaseCalls.add(traceEntry); + } else if (memIncrease < 0) { + brkDeAllocationStacks.add(traceEntry); + } else if (memIncrease > 0) { + brkAllocationStacks.add(traceEntry); + } + } else { + //failed brk call + failedBrkCalls.add(traceEntry); + } + } else { + throw new IOException("Cannot handle entry type:"+traceEntry.getType()); + } + + } + + //now we need to process all decoded entries + //calculate totals + totalBrkIncreaseStacks = brkAllocationStacks.size(); + totalBrkDecreaseStacks = brkDeAllocationStacks.size(); + totalBrkNeutralStacks = noIncreaseCalls.size(); + totalBrkFailedStacks = failedBrkCalls.size(); + + //now get unique failed stacks + for (BrkTraceEntry entry : failedBrkCalls) { + //see if this already exists on the unique list + boolean found = false; + for (int i=0;i allBrkStacks = new ArrayList(); + allBrkStacks.addAll(brkAllocationStacks); + allBrkStacks.addAll(brkDeAllocationStacks); + + for (BrkTraceEntry entry : allBrkStacks) { + //see if this already exists on the unique list + boolean found = false; + for (int i=0;i() { + @Override + public int compare(BrkStackOccurence o1, + BrkStackOccurence o2) { + return o2.getTimesFound()-o1.getTimesFound(); + } + + }); + + //initial conditions for the combined common stack print + int stackDepth = 0; + //create initial positions array (all of them) + Integer[] positions = new Integer[uniqueBrkStacks.size()]; + for (int i=0;i> stackRelationships) throws IOException { + + //open the traces 
file and process each line + try (BufferedReader br = new BufferedReader(new FileReader(inFile))) { + + positionNextEntryOnProcessedFile(br); + positionNextEntryOnProcessedFile(br); + + //read all entries, first we have the allocation stacks + StackOccurence traceEntry = null; + while ((traceEntry = readProcessedTraceEntry(br)) != null) { + //we might have top level memory allocator calls twice, because of their different return addresses + boolean found = false; + for (StackOccurence existingAllocStack : uniqueAllocationStacks) { + if (existingAllocStack.getStack().equals(traceEntry.getStack())) { + //already exists, increase + existingAllocStack.increaseTimesFound(traceEntry.getTimesFound()); + found = true; + break; + } + } + if (!found) { + //first time + uniqueAllocationStacks.add(traceEntry); + } + } + + //now the deallocation stacks + while ((traceEntry = readProcessedTraceEntry(br)) != null) { + //we might have top level memory allocator calls twice, because of their different return addresses + boolean found = false; + for (StackOccurence existingDeAllocStack : uniqueDeallocationStacks) { + if (existingDeAllocStack.getStack().equals(traceEntry.getStack())) { + //already exists, increase + existingDeAllocStack.increaseTimesFound(traceEntry.getTimesFound()); + found = true; + break; + } + } + if (!found) { + //first time + uniqueDeallocationStacks.add(traceEntry); + } + } + + //copy all of them, they will eventually be removed as they are located + uniqueUnfreedAllocationStacks.addAll(uniqueAllocationStacks); + + //for each free + //String matchFree = "MONHND.exe`__1cPI2_MONHND_ToposEnext6M_pnbAI1_MONHND_UpdateableObject__+0x5c"; + //String matchAllocation = "MONHND.exe`__1cGDLList4CI_Jins_after6MpvrkI_1_+0x48"; + + for (StackOccurence uniqueDeallocationStack : uniqueDeallocationStacks) { + //if (uniqueDeallocationStack.getStack().contains(matchFree)) { + // System.out.println("Examining stack ("+uniqueDeallocationStack.getTimesFound()+") \n"+uniqueDeallocationStack.getStack()); + //} + //check if we have found a match of it + boolean foundDeallocationStack = false; + for (StackOccurence freeRelationshipStack : stackRelationships.keySet()) { + + if (uniqueDeallocationStack.getStack().equals(freeRelationshipStack.getStack())) { + //System.out.println("\nLocated deallocation stack in relationships:\n"+uniqueDeallocationStack.getStack()+"\n"); + + //found match + foundDeallocationStack=true; + //get all stacks that this free released memory from + List relatedAllocations = stackRelationships.get(freeRelationshipStack); + for (StackOccurence relatedAllocationStack : relatedAllocations) { + //if (uniqueDeallocationStack.getStack().contains(matchFree)) { + // System.out.println("Examining related allocation stack ("+relatedAllocationStack.getTimesFound()+") \n"+relatedAllocationStack.getStack()); + //} + //check all allocation stacks + List foundStacks = new ArrayList(); + for (StackOccurence unfreedAllocationStack : uniqueUnfreedAllocationStacks) { + /* workaround for some .d scripts + if (unfreedAllocationStack.getStack().equals(relatedAllocationStack.getStack().replaceAll("malloc\\+0x64", "malloc"))) { + foundStacks.add(unfreedAllocationStack); + }*/ + //System.out.println("\n### Comparing \n"+unfreedAllocationStack.getStack()+"\n\n with:\n"+relatedAllocationStack.getStack()); + if (unfreedAllocationStack.getStack().equals(relatedAllocationStack.getStack())) { + foundStacks.add(unfreedAllocationStack); + //System.out.println("\n***Found match***\n"); + } else { + + } + + } + //remove 
all found + uniqueUnfreedAllocationStacks.removeAll(foundStacks); + } + } + } + if (!foundDeallocationStack) { + //add to unknown free stacks + uniqueUnknownDeallocationStacks.add(uniqueDeallocationStack); + //System.out.println("\n==>Did not locate deallocation stack in relationships:\n"+uniqueDeallocationStack.getStack()+"\n"); + + } + } + + + }catch (IOException e) { + System.out.println("problem reading input (traces) file:"+e.getMessage()); + + throw e; + } + } + + + /** + * This method combines the information from all unique call stacks to show all memory that was allocated + * and from which place. It is a different look at the same data, combining the call stacks to see their relation + * @param stackDepth the stack depth, start with 0 + * @param elementPositions pointers to the element positions + * @param stackElements the stack elements + * @throws IOException in case a stack trace has unexpected data + */ + private String getMergedMemoryAllocatorStack(int stackDepth, Integer[] elementPositions, List stackElements) throws IOException { + //all elements have the same stack up to here. + + //check if we reached a leaf + if (elementPositions.length == 1) { + //this is a leaf + StackOccurence leafStackEntry = stackElements.get(elementPositions[0]); + String[] leafStack = getCallStack(leafStackEntry.getStack()); + //create the formatted stack lines + StringBuffer stackLines = new StringBuffer(); + for (int i=stackDepth;i> commonStackSets = new HashMap>(); + for (int pos : elementPositions) { + + //examine all elements and split into sets that have the same stack at this level + StackOccurence stackReportEntry = stackElements.get(pos); + String[] callStack = getCallStack(stackReportEntry.getStack()); + if (callStack.length < stackDepth+1) { + //this stack has reached its end. Should not happen? + throw new IOException("Found stack that does not have a next element:\n"+stackReportEntry+"\n"); + } else { + //OK we go in it has a next element + //get the next stack element + String currentStackElement = callStack[stackDepth+1]; + if (commonStackSets.containsKey(currentStackElement)) { + //System.out.println("common stack array contains key"); + commonStackSets.get(currentStackElement).add(pos); + } else { + //first one + List elementPointers = new ArrayList(); + elementPointers.add(pos); + commonStackSets.put(currentStackElement, elementPointers); + } + } + } + + //recursively go through the next stack level + for (String commonStackKey : commonStackSets.keySet()) { + Integer[] intArrayType = new Integer[0]; + combinedStackLines.append(getMergedMemoryAllocatorStack(stackDepth+1, commonStackSets.get(commonStackKey).toArray(intArrayType), stackElements)).toString(); + } + + return combinedStackLines.toString(); + } + + /** + * This method combines the information from all unique call stacks to show all memory that was allocated + * and from which place. It is a different look at the same data, combining the call stacks to see their relation + * @param stackDepth the stack depth, start with 0 + * @param elementPositions pointers to the element positions + * @param stackElements the stack elements + * @throws IOException in case a stack trace has unexpected data + */ + private String getMergedBrkStack(int stackDepth, Integer[] elementPositions, List stackElements) throws IOException { + //all elements have the same stack up to here. 
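+ //merging strategy (same as for the memory allocator stacks above): the indices in
+ //elementPositions are partitioned by the frame that appears at the next stack depth,
+ //each partition is merged recursively, and a partition that is left with a single
+ //surviving stack is treated as a leaf whose remaining frames are printed as-is.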
+ //System.out.println("combinedPrintStackDepth: depth:"+stackDepth+" elements:"+elementPositions.length); + + //check if we reached a leaf + if (elementPositions.length == 1) { + //this is a leaf + BrkStackOccurence leafStackEntry = stackElements.get(elementPositions[0]); + String[] leafStack = getCallStack(leafStackEntry.getStack()); + //create the formatted stack lines + StringBuffer stackLines = new StringBuffer(); + for (int i=stackDepth;i> commonStackSets = new HashMap>(); + for (int pos : elementPositions) { + + //examine all elements and split into sets that have the same stack at this level + BrkStackOccurence stackReportEntry = stackElements.get(pos); + String[] callStack = getCallStack(stackReportEntry.getStack()); + if (callStack.length < stackDepth+1) { + //this stack has reached its end. Should not happen? + throw new IOException("Found stack that does not have a next element:\n"+stackReportEntry+"\n"); + } else { + //OK we go in it has a next element + //get the next stack element + String currentStackElement = callStack[stackDepth+1]; + if (commonStackSets.containsKey(currentStackElement)) { + //System.out.println("common stack array contains key"); + commonStackSets.get(currentStackElement).add(pos); + } else { + //first one + //System.out.println("common stack array does not contain key"); + List elementPointers = new ArrayList(); + elementPointers.add(pos); + commonStackSets.put(currentStackElement, elementPointers); + } + } + } + + //recursively go through the next stack level + for (String commonStackKey : commonStackSets.keySet()) { + Integer[] intArrayType = new Integer[0]; + combinedStackLines.append(getMergedBrkStack(stackDepth+1, commonStackSets.get(commonStackKey).toArray(intArrayType), stackElements)).toString(); + } + + return combinedStackLines.toString(); + } + + /** + * Returns the callstack as a String array + * @param callstack the callstack with new lines as a single string + * @return the callstack as a string array + */ + private String[] getCallStack(String callstack) { + String[] stackEntries = callstack.split("\n"); + String[] reversedStackEntries = reverseStackEntries(stackEntries); + return reversedStackEntries; + } + + /** + * Simpy reverses the stack entries + * @param stackEntries + * @return + */ + private String[] reverseStackEntries (String[] stackEntries) { + String[] ret = new String[stackEntries.length]; + for (int i=0;i entryLines = new ArrayList(); + + String line; + boolean processingEntry = false; + while ((line = br.readLine()) != null) { + + if (line.contains(entryStartCharSequence)) { + //sanity check + if (processingEntry) { + throw new IOException("Trace file corrupted. Found char sequence:"+entryStartCharSequence+" while already processing trace entry. Current line:"+line); + } else { + //mark beginning of processing a new trace entry + processingEntry = true; + + //found start sequence + entryLines.add(line); + + //check if it is a single line + if (line.contains(entryEndCharSequence)) { + //mark end of processing entry + processingEntry = false; + return new MemoryAllocatorTraceEntry(entryLines); + } + } + } else { + //line does not contain start sequence + if (processingEntry) { + if (!line.trim().equals("")) { + //if we have a non-empty line, add it + entryLines.add(line); + } + } + + //check if contains the end sequence + if (line.contains(entryEndCharSequence)) { + //sanity check + if (!processingEntry) { + throw new IOException("Trace file corrupted. 
Found char sequence:"+entryEndCharSequence+" while not processing a trace entry. Current line:"+line); + } + + //mark end of processing entry + processingEntry = false; + return new MemoryAllocatorTraceEntry(entryLines); + } + } + + } + //reached end of file + return null; + } + + /** + * reads the next log entry from the file, for a generic file + * @param br + * @return + * @throws IOException + */ + public BrkTraceEntry readBrkTraceEntry(BufferedReader br) throws IOException{ + + List entryLines = new ArrayList(); + + String line; + boolean processingEntry = false; + while ((line = br.readLine()) != null) { + + if (line.contains(entryStartCharSequence)) { + //sanity check + if (processingEntry) { + throw new IOException("Trace file corrupted. Found char sequence:"+entryStartCharSequence+" while already processing trace entry. Current line:"+line); + } else { + //mark beginning of processing a new trace entry + processingEntry = true; + + //found start sequence + entryLines.add(line); + + //check if it is a single line + if (line.contains(entryEndCharSequence)) { + //mark end of processing entry + processingEntry = false; + return new BrkTraceEntry(entryLines); + } + } + } else { + //line does not contain start sequence + if (processingEntry) { + if (!line.trim().equals("")) { + //if we have a non-empty line, add it + entryLines.add(line); + } + } + + //check if contains the end sequence + if (line.contains(entryEndCharSequence)) { + //sanity check + if (!processingEntry) { + throw new IOException("Trace file corrupted. Found char sequence:"+entryEndCharSequence+" while not processing a trace entry. Current line:"+line); + } + + //mark end of processing entry + processingEntry = false; + return new BrkTraceEntry(entryLines); + } + } + + } + //reached end of file + return null; + } + + + /** + * reads the next log entry from the file (processed file) + * @param br + * @return + * @throws IOException + */ + public StackOccurence readProcessedTraceEntry(BufferedReader br) throws IOException{ + List entryLines = new ArrayList(); + + String line; + boolean processingEntry = false; + while ((line = br.readLine()) != null) { + //System.out.println("read entry: reading line:"+line); + if (line.trim().equals("") && (!processingEntry)) { + //go on next line + continue; + } else if (line.trim().equals("") && processingEntry) { + break; + } else { + if (line.trim().startsWith("==")) { + //found end of section + return null; + } else { + processingEntry = true; + entryLines.add(line.trim()); + } + } + } + + //convert entry & return + StringBuffer stackSB = new StringBuffer(); + Integer times = null; + + for (int i=0;i malloc + * @param stack the string that is the stack + * @return + */ + public static String clearTopLevelStackReturnPointer(String stack) { + StringBuffer ret = new StringBuffer(); + String[] stackLines = stack.split("\n"); + for (int i=0;i malloc + if (i != (stackLines.length-1)) { + ret.append("\n"); + } + } else { + if (!stackLines[i].isEmpty()) { + ret.append(stackLines[i]); + if (i != (stackLines.length-1)) { + ret.append("\n"); + } + } + } + } + return ret.toString(); + } + /** + * positions the reader on the first entry + * @param br the reader + * @throws IOException if it cannot position on the next entry + */ + private void positionNextEntryOnProcessedFile(BufferedReader br) throws IOException{ + String line; + + while ( (line = br.readLine() ) != null ) { + //System.out.println("position: reading line:"+line); + if (line.trim().equals("")) { + //go on next line + continue; + 
} else if (line.trim().startsWith("==")) { + break; + } else { + //wrong..... + throw new IOException("Cannot determine file position"); + } + } + } + + /** + * Prints the analysis information + */ + public void printAnalysisInformation(TraceFileType fileType) { + switch (fileType) { + + case MEMALLOC : { + //to plevel info + logMessage("Call statistics", true, writer); + logMessage("Found "+totalMallocCalls+" malloc calls", true, writer); + logMessage("Found "+totalCallocCalls+" calloc calls", true, writer); + logMessage("Found "+totalReallocCalls+" realloc calls", true, writer); + logMessage("Found "+totalFreeCalls+" free calls", true, writer); + //wrong delete stacks + logMessage("\nDouble free issues", true, writer); + logMessage("Found "+totalDoubleFreeStacks+" double free stacks in total", true, writer); + if (totalDoubleFreeStacks > 0) { + logMessage("Found "+uniqueDoubleFreeStacks.size()+" unique double free stacks", true, writer); + for (StackOccurence dFreeStack : uniqueDoubleFreeStacks) { + logMessage("Found double free stack "+dFreeStack.getTimesFound()+" times. Stack:\n"+dFreeStack.getStack()+"\n", false, writer); + } + + } + + logMessage("\nFree non-allocated memory issues (may also be potential memory leaks)", true, writer); + //free on unallocated memory (may be an issue, or not) + logMessage("Found "+totalPotentialWrongFreeSuspects+" stacks that freed memory that was not allocated during the period of the trace", true, writer); + logMessage("Found "+uniquePotentialWrongFreeStacks.size()+" unique stacks that freed memory that was not allocated during the period of the trace", true, writer); + logMessage("Found "+uniqueSuccessfulFreeStacks.size()+" unique stacks that correctly freed memory", true, writer); + logMessage("Found "+uniquePotentialWrongFreeStacksNeverCorrectlyFreed.size()+" unique stacks that have never been found to correctly free memory", true, writer); + logMessage("Suspected wrong free stacks\n",false, writer); + for (StackOccurence delUnallocatedStack : uniquePotentialWrongFreeStacks) { + logMessage("Suspected wrong free stack found "+delUnallocatedStack.getTimesFound()+" times",false, writer); + logMessage(delUnallocatedStack.getStack()+"\n\n",false, writer); + } + + logMessage("Strongly suspected wrong free stacks\n",false, writer); + for (StackOccurence delUnallocatedStack : uniquePotentialWrongFreeStacksNeverCorrectlyFreed) { + logMessage("Strongly suspected wrong free stack found "+delUnallocatedStack.getTimesFound()+" times",false, writer); + logMessage(delUnallocatedStack.getStack()+"\n\n",false, writer); + } + + //potential memory leaks + logMessage("\nMemory leak issues", true, writer); + logMessage("Found "+totalPoteltialLeakSuspects+" potential memory leaks in total", true, writer); + logMessage("Found "+uniquePotentialLeakStacks.size()+" unique potential memory leak stacks (suspects)", true, writer); + logMessage("Found "+uniqueSuccessfullyDeletedStacks.size()+" unique stacks that allocated memory that was correctly freed", true, writer); + logMessage("Found "+uniquePotentialLeakStacksNeverFreed.size()+" unique stacks that were never correctly deleted/freed (strong suspects)\n", true, writer); + + int totalUndeletedAllocations = 0; + //here we are showing the leak stacks based on their frequency + + for (StackOccurence suspectCallStack : uniquePotentialLeakStacks) { + logMessage("Suspect leak stack found "+suspectCallStack.getTimesFound()+" times",false, writer); + totalUndeletedAllocations += suspectCallStack.getTimesFound(); + 
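+ //running total of the per-stack occurrences; it is compared further below with the
+ //pre-processed count (totalPoteltialLeakSuspects) as a consistency check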
logMessage(suspectCallStack.getStack()+"\n\n",false, writer); + } + + for (StackOccurence suspectCallStack : uniquePotentialLeakStacksNeverFreed) { + logMessage("Strongly suspect leak stack found "+suspectCallStack.getTimesFound()+" times",false, writer); + logMessage(suspectCallStack.getStack()+"\n\n",false, writer); + } + + if (totalUndeletedAllocations != totalPoteltialLeakSuspects) { + //total undeleted allocations + logMessage("(Warn) Found mispatch in counting total memory allocations that were not deleted. From pre-processing: "+totalPoteltialLeakSuspects+" from each individual stack count:"+totalUndeletedAllocations+"\n", true, writer); + } + + //combined stacks + if (!combinedLeakStackSuspects.isEmpty()) { + logMessage("Presenting memory leak suspects in a combined call stack\n", false, writer); + logMessage(combinedLeakStackSuspects, false, writer); + } + //combined stacks + if (!combinedLeakStackStrongSuspects.isEmpty()) { + logMessage("Presenting strong memory leak suspects in a combined call stack\n", false, writer); + logMessage(combinedLeakStackStrongSuspects, false, writer); + } + break; + } + + case BRK : { + logMessage("\nCall statistics\n", true, writer); + logMessage("Found "+totalBrkIncreaseStacks+" brk calls that increased the process virtual memory", true, writer); + logMessage("Found "+totalBrkDecreaseStacks+" brk calls that decreased the process virtual memory", true, writer); + logMessage("Found "+totalBrkNeutralStacks+" brk calls that were neutral in terms of memory", true, writer); + logMessage("Found "+totalBrkFailedStacks+" brk calls that failed", true, writer); + logMessage("Found in total "+uniqueBrkStacks.size()+" unique brk stacks", true, writer); + + if (totalBrkFailedStacks>0) { + logMessage("\n*** Failed brk calls (unsuccessful memory increase requests) ***\n", true, writer); + for (BrkStackOccurence failedBrkStacks : uniqueFailedBrkStacks) { + logMessage("Failed brk stack found "+failedBrkStacks.getTimesFound()+" times, total size:"+failedBrkStacks.getSizeIncrease(),false, writer); + logMessage(failedBrkStacks.getStack()+"\n\n",false, writer); + } + } + + logMessage("\n*** Unique brk call stacks ***\n", false, writer); + for (BrkStackOccurence failedBrkStacks : uniqueBrkStacks) { + logMessage("Unique brk stack found "+failedBrkStacks.getTimesFound()+" times, total size:"+failedBrkStacks.getSizeIncrease(),false, writer); + logMessage(failedBrkStacks.getStack()+"\n\n",false, writer); + } + + //combined stack + logMessage("Presenting brk stacks in a combined call stack\n", false, writer); + logMessage(combinedBrkStacks, false, writer); + + break; + } + + default : { + } + } + + writer.close(); + } + + + + + + /** + * Prints a combined analysis results from a set of results files + * @param fileAnalysisResults the map with the files and their analysis results + * @param fileOut the output file to be used + * @throws UnsupportedEncodingException + * @throws FileNotFoundException + */ + public static void printMemoryAllocatorCombinedAnalysisResults(Map fileAnalysisResults, String fileOut) throws FileNotFoundException, UnsupportedEncodingException { + PrintWriter combinedFileWrite = new PrintWriter(fileOut, "UTF-8"); + + File[] files = fileAnalysisResults.keySet().toArray(new File[]{}); + Arrays.sort(files, new Comparator() { + @Override + public int compare(File o1, File o2) { + return o1.getName().compareTo(o2.getName()); + } + + }); + + StringBuffer fileNamesSb = new StringBuffer(); + for (int i=0;i entryLines = new ArrayList(); + public static void 
printProcessedFilesCombinedAnalysisResults(Map fileAnalysisResults, String fileOut, boolean printAllocDeallocStacks) throws FileNotFoundException, UnsupportedEncodingException { + PrintWriter combinedFileWrite = new PrintWriter(fileOut, "UTF-8"); - String line; - boolean processingEntry = false; - while ((line = br.readLine()) != null) { + File[] files = fileAnalysisResults.keySet().toArray(new File[]{}); + Arrays.sort(files, new Comparator() { + @Override + public int compare(File o1, File o2) { + return o1.getName().compareTo(o2.getName()); + } - if (line.contains(entryStartCharSequence)) { - //sanity check - if (processingEntry) { - throw new IOException("Trace file corrupted. Found char sequence:"+entryStartCharSequence+" while already processing trace entry. Current line:"+line); - } else { - //mark beginning of processing a new trace entry - processingEntry = true; - - //found start sequence - entryLines.add(line); - - //check if it is a single line - if (line.contains(entryEndCharSequence)) { - //mark end of processing entry - processingEntry = false; - return new DTLeakLogEntry(entryLines); + }); + + StringBuffer fileNamesSb = new StringBuffer(); + for (int i=0;i - */ - public static void main(String[] args) throws IOException{ - if (args.length != 2) { - System.out.println("arguments: " ); - return; - } - - DTLeakAnalyzer dtLeakAnalyzer = new DTLeakAnalyzer(args[0], args[1]); - dtLeakAnalyzer.performAnalysis(); - } - /** * Log entry class - * holds information about the log entry + * holds information about the log entry for a generic * * @author Petros Pissias * */ - public class DTLeakLogEntry { + public static class MemoryAllocatorTraceEntry { private final long entryNumber; private final String date; - private final DTLeakLogEntryType type; + private final MemoryAllocationTraceEntryType type; private final String threadId; - private final String address; - private final String additionalInfo; + private final String address; + private final long size; + private final String previousAddress; private final String callStack; - public DTLeakLogEntry(List lines) throws IOException { + public MemoryAllocatorTraceEntry(List lines) throws IOException { if (lines.size() == 0) { throw new IOException("Empty trace entry requested"); } else { @@ -341,14 +2135,14 @@ public DTLeakLogEntry(List lines) throws IOException { } //determine log type - if (lineFields[3].equals("new") || lineFields[3].equals("malloc")) { - type = DTLeakLogEntryType.NEW; - } else if (lineFields[3].equals("new[]")) { - type = DTLeakLogEntryType.NEWARRAY; - } else if (lineFields[3].equals("delete") || lineFields[3].equals("free")) { - type = DTLeakLogEntryType.DELETE; - } else if (lineFields[3].equals("delete[]")) { - type = DTLeakLogEntryType.DELETEARRAY; + if (lineFields[3].equals("malloc")) { + type = MemoryAllocationTraceEntryType.MALLOC; + } else if (lineFields[3].equals("calloc")) { + type = MemoryAllocationTraceEntryType.CALLOC; + } else if (lineFields[3].equals("free")) { + type = MemoryAllocationTraceEntryType.FREE; + } else if (lineFields[3].equals("realloc")) { + type = MemoryAllocationTraceEntryType.REALLOC; } else { //do not understand throw new IOException("cannot decode line:"+firstLine); @@ -361,27 +2155,169 @@ public DTLeakLogEntry(List lines) throws IOException { //now special handling switch (type) { - case DELETE : { + case MALLOC : { address = lineFields[4]; - additionalInfo = null; + size = Long.parseLong(lineFields[5]); + previousAddress = null; break; } - case DELETEARRAY : { + case CALLOC : { 
address = lineFields[4]; - additionalInfo = null; + size = Long.parseLong(lineFields[5]); + previousAddress = null; break; } - case NEW : { + case REALLOC : { + address = lineFields[5]; + size = Long.parseLong(lineFields[6]); + previousAddress = lineFields[4]; + break; + } + + case FREE : { + address = lineFields[4]; + size=0; + previousAddress = null; + break; + } + + default : { + throw new IOException("cannot determine type:"+type.name()); + } + } + + //get call stack + StringBuffer sb = new StringBuffer(); + for (int i=1;i", "").trim(); + if (!trimmedLine.equals("")) { + sb.append(trimmedLine).append("\n"); + } + } + if (sb.toString().isEmpty()) { + callStack = null; + } else { + callStack = clearTopLevelStackReturnPointer(sb.toString()); + } + + } + } + + public long getEntryNumber() { + return entryNumber; + } + + public String getDate() { + return date; + } + + public MemoryAllocationTraceEntryType getType() { + return type; + } + + public String getThreadId() { + return threadId; + } + + public String getAddress() { + return address; + } + + public long getSize() { + return size; + } + + public String getPreviousAddress() { + return previousAddress; + } + + public String getCallStack() { + return callStack; + } + + @Override + public String toString() { + return "DTGenericLeakLogEntry [entryNumber=" + entryNumber + + ", date=" + date + ", type=" + type + ", threadId=" + + threadId + ", address=" + address + ", size=" + size + + ", previousAddress=" + previousAddress + ", callStack=" + + callStack + "]"; + } + + + } + + /** + * Log entry class + * holds information about the log entry for a generic + * + * @author Petros Pissias + * + */ + public static class BrkTraceEntry { + private final long entryNumber; + private final String date; + private final BrkTraceEntryType type; + private final String threadId; + private final String address; + private final long size; + private final boolean success; + private final String callStack; + + public BrkTraceEntry(List lines) throws IOException { + if (lines.size() == 0) { + throw new IOException("Empty trace entry requested"); + } else { + String firstLine = lines.get(0).replaceAll("<__", "").replaceAll("__>", ""); + String[] lineFields = firstLine.split(";"); + + if (lineFields.length < 6) { + throw new IOException("cannot decode line:"+firstLine); + } + + //determine log type + if (lineFields[3].equals("brk")) { + type = BrkTraceEntryType.BRK; + } else if (lineFields[3].equals("sbrk")) { + type = BrkTraceEntryType.SBRK; + } else { + //do not understand + throw new IOException("cannot decode line:"+firstLine); + } + + //get data + entryNumber = Long.parseLong(lineFields[0]); + date = lineFields[1]; + threadId = lineFields[2]; + + //now special handling + switch (type) { + case BRK : { address = lineFields[4]; - additionalInfo = lineFields[5]; + size = -1; + int brkReturn = Integer.parseInt(lineFields[5]); + if (brkReturn != -1) { + //success + success = true; + } else { + success = false; + } + break; } - case NEWARRAY : { - address = null; - additionalInfo = lineFields[5]; + case SBRK : { + address = lineFields[4]; + size = Long.parseLong(lineFields[5]); + + if (!address.equals("-0x1")) { + //success + success = true; + } else { + success = false; + } break; } @@ -401,7 +2337,7 @@ public DTLeakLogEntry(List lines) throws IOException { if (sb.toString().isEmpty()) { callStack = null; } else { - callStack = sb.toString(); + callStack = clearTopLevelStackReturnPointer(sb.toString()); } } @@ -415,7 +2351,7 @@ public String getDate() { return 
date; } - public DTLeakLogEntryType getType() { + public BrkTraceEntryType getType() { return type; } @@ -427,8 +2363,12 @@ public String getAddress() { return address; } - public String getAdditionalInfo() { - return additionalInfo; + public long getSize() { + return size; + } + + public boolean isSuccess() { + return success; } public String getCallStack() { @@ -437,15 +2377,15 @@ public String getCallStack() { @Override public String toString() { - return "DTLeakLogEntry [entryNumber=" + entryNumber + ", date=" + return "DTLeakBrkLogEntry [entryNumber=" + entryNumber + ", date=" + date + ", type=" + type + ", threadId=" + threadId - + ", address=" + address + ", additionalInfo=" - + additionalInfo + ", callStack=" + callStack + "]"; + + ", address=" + address + ", size=" + size + ", success=" + + success + ", callStack=" + callStack + "]"; } } - + /** * Class that holds informatoin about how many times * a specific call stack allocated memory that was not deleted @@ -453,15 +2393,20 @@ public String toString() { * @author Petros Pissias * */ - public class DTLeakReportEntry { + public static class StackOccurence { private final String stack; private volatile int timesFound; - public DTLeakReportEntry(String stack, int times) { + public StackOccurence(String stack, int times) { this.stack = stack; this.timesFound = times; } + public StackOccurence(String stack) { + this.stack = stack; + this.timesFound = 1; + } + public String getStack() { return stack; } @@ -474,72 +2419,63 @@ public void increaseTimesFound() { timesFound++; } - } - - /** - * Class that holds information about the wrong delete entries - * @author Petros Pissias - * - */ - public class DTLeakWrongDeleteEntry { - private final DTLeakLogEntry deleteLogEntry; - private final DTLeakLogEntry arrayAllocationLogEntry; - public DTLeakWrongDeleteEntry(DTLeakLogEntry deleteLogEntry, - DTLeakLogEntry arrayAllocationLogEntry) { - super(); - this.deleteLogEntry = deleteLogEntry; - this.arrayAllocationLogEntry = arrayAllocationLogEntry; - } - public DTLeakLogEntry getDeleteLogEntry() { - return deleteLogEntry; - } - public DTLeakLogEntry getArrayAllocationLogEntry() { - return arrayAllocationLogEntry; + public void increaseTimesFound(int amount) { + timesFound+=amount;; } + public String getInformation() { + return "Found "+timesFound+" times"; + } } - /** * Class that holds informatoin about how many times - * a specific wrong delete was performed + * a specific call stack allocated memory that was not deleted * * @author Petros Pissias * */ - public class DTLeakWrongDeleteReportEntry { - - private final String stack; - private volatile int timesFound; - private final DTLeakWrongDeleteEntry example; + public static class BrkStackOccurence extends StackOccurence{ + private volatile long sizeIncrease; - public DTLeakWrongDeleteReportEntry(String stack, int times, DTLeakWrongDeleteEntry example) { - this.stack = stack; - this.timesFound = times; - this.example = example; - } - - public String getStack() { - return stack; + public BrkStackOccurence(String stack, int times, long sizeIncrease) { + super(stack, times); + this.sizeIncrease = sizeIncrease; } - public int getTimesFound() { - return timesFound; + public BrkStackOccurence(String stack, long sizeIncrease) { + super(stack); + this.sizeIncrease = sizeIncrease; } - - public DTLeakWrongDeleteEntry getExample() { - return example; - } - - public void increaseTimesFound() { - timesFound++; + + public long getSizeIncrease() { + return sizeIncrease; } - } + public void increaseSize(long 
size) { + sizeIncrease+=size; + } + + public String getInformation() { + return super.getInformation()+", overall size increase: "+sizeIncrease+" bytes"; + } + } + + + public static enum MemoryAllocationTraceEntryType { + MALLOC, + CALLOC, + REALLOC, + FREE + } + + public static enum BrkTraceEntryType { + BRK, + SBRK, + } - public enum DTLeakLogEntryType { - NEW, - NEWARRAY, - DELETE, - DELETEARRAY + public static enum TraceFileType { + MEMALLOC, + BRK } + } diff --git a/test_run.bat b/test_run.bat deleted file mode 100644 index 821498b..0000000 --- a/test_run.bat +++ /dev/null @@ -1 +0,0 @@ -java -jar dtleakanalyzer.jar resources/test_files/dtrace.array.log.mlg.int.6.8.50.2.dem resources/test_files/dtrace.array.log.mlg.int.6.8.50.2.dem.report \ No newline at end of file diff --git a/test_run.sh b/test_run.sh deleted file mode 100644 index 821498b..0000000 --- a/test_run.sh +++ /dev/null @@ -1 +0,0 @@ -java -jar dtleakanalyzer.jar resources/test_files/dtrace.array.log.mlg.int.6.8.50.2.dem resources/test_files/dtrace.array.log.mlg.int.6.8.50.2.dem.report \ No newline at end of file
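As a side note on the trace format handled above: each memory-allocator trace entry is delimited by `<__` / `__>` markers and starts with a `;`-separated header (entry number, date, thread id, call, address, size), as decoded by MemoryAllocatorTraceEntry. The snippet below is a minimal, self-contained sketch of that decoding step only; the class name and the sample header line are invented for illustration and are not part of the tool.
```
import java.io.IOException;

/**
 * Minimal sketch of decoding a single trace entry header, following the field
 * order used by MemoryAllocatorTraceEntry (entryNumber;date;threadId;call;address;size).
 * The class name and the sample line are illustrative only.
 */
public class TraceHeaderParseSketch {

    public static void main(String[] args) throws IOException {
        // hypothetical header line in the <__ ... __> entry format
        String header = "<__42;2018 Nov 8 08:03:54;1;malloc;0x1002a3f40;128__>";

        // strip the entry delimiters before splitting, as the analyzer does
        String stripped = header.replaceAll("<__", "").replaceAll("__>", "");
        String[] fields = stripped.split(";");
        if (fields.length < 6) {
            throw new IOException("cannot decode line:" + stripped);
        }

        long entryNumber = Long.parseLong(fields[0]);
        String date = fields[1];
        String threadId = fields[2];
        String call = fields[3];    // malloc, calloc, realloc or free
        String address = fields[4]; // allocation address (malloc/calloc layout; realloc shifts the fields by one)
        long size = Long.parseLong(fields[5]);

        System.out.println(call + " of " + size + " bytes at " + address
                + " (entry " + entryNumber + ", thread " + threadId + ", " + date + ")");
    }
}
```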
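The combined call-stack reports (for leak suspects as well as for brk stacks) rest on one recursive merge of stacks that share a common caller prefix. Below is a compact, self-contained sketch of that idea with invented frame names and a simplified grouping rule; it is not the tool's implementation, which operates on the StackOccurence entries shown above and reports occurrence information per stack.
```
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Sketch of the combined call-stack idea: stacks are grouped by the frame at the
 * current depth and merged recursively; a group of one is printed as a leaf.
 * Frame names are invented; the real tool works on the call stacks from the trace files.
 */
public class MergedStackSketch {

    static String merge(List<String[]> stacks, List<Integer> positions, int depth) {
        StringBuilder out = new StringBuilder();
        if (positions.size() == 1) {
            // leaf: print the remaining frames of the single surviving stack
            String[] stack = stacks.get(positions.get(0));
            for (int i = depth; i < stack.length; i++) {
                out.append(indent(i)).append(stack[i]).append("\n");
            }
            return out.toString();
        }
        // group the surviving stacks by the frame found at this depth
        Map<String, List<Integer>> groups = new LinkedHashMap<>();
        for (int pos : positions) {
            String[] stack = stacks.get(pos);
            if (depth >= stack.length) {
                continue; // identical stacks collapse here; skipped in this sketch
            }
            groups.computeIfAbsent(stack[depth], k -> new ArrayList<>()).add(pos);
        }
        for (Map.Entry<String, List<Integer>> group : groups.entrySet()) {
            out.append(indent(depth)).append(group.getKey())
               .append(" (").append(group.getValue().size()).append(" stacks)\n");
            out.append(merge(stacks, group.getValue(), depth + 1));
        }
        return out.toString();
    }

    static String indent(int depth) {
        return new String(new char[depth]).replace('\0', ' ');
    }

    public static void main(String[] args) {
        // stacks ordered from outermost caller down to the allocation site
        List<String[]> stacks = Arrays.asList(
                new String[] { "main", "loadConfig", "malloc" },
                new String[] { "main", "loadConfig", "strdup", "malloc" },
                new String[] { "main", "handleRequest", "malloc" });
        System.out.print(merge(stacks, Arrays.asList(0, 1, 2), 0));
    }
}
```
Running the sketch prints a single tree rooted at "main" in which the three allocation sites appear under their shared callers, which is the same view the report sections "Presenting memory leak suspects in a combined call stack" and "Presenting brk stacks in a combined call stack" give for the real traces.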