This chapter describes how the execution of Objective Caml programs can be profiled, by recording how many times functions are called, how often each branch of a conditional is taken, and so on.
Before profiling an execution, the program must be compiled in profiling mode, using the ocamlcp front-end to the ocamlc compiler (see chapter 8). When compiling modules separately, ocamlcp must be used when compiling the modules (production of .cmo files), and can also be used (though this is not strictly necessary) when linking them together.
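For example, for a program made of two modules mod1.ml and mod2.ml (the module and executable names here are purely illustrative), a profiling build could look like:

ocamlcp -c mod1.ml
ocamlcp -c mod2.ml
ocamlcp -o myprog mod1.cmo mod2.cmo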
If a module (.ml file) doesn’t have a corresponding interface (.mli file), then compiling it with ocamlcp will produce object files (.cmi and .cmo) that are not compatible with the ones produced by ocamlc, which may lead to problems (if the .cmi or .cmo is still around) when switching between profiling and non-profiling compilations. To avoid this problem, you should always have a .mli file for each .ml file.
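For a self-contained module foo.ml (a hypothetical name), one convenient way to obtain a first draft of the interface is to have the compiler print the inferred signature and save it as the .mli file, which can then be edited as needed:

ocamlc -i foo.ml > foo.mli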
To make sure your programs can be compiled in profiling mode, avoid using any identifier that begins with __ocaml_prof.
The amount of profiling information can be controlled through the -p option to ocamlcp, followed by one or several letters indicating which parts of the program should be profiled:

f  function calls: a count point is set at the beginning of each function body
i  if…then…else…: count points are set in both the then branch and the else branch
l  while and for loops: a count point is set at the beginning of the loop body
m  match branches: a count point is set at the beginning of the body of each branch
t  try…with branches: a count point is set at the beginning of the body of each branch
a  all of the above
For instance, compiling with ocamlcp -p film profiles function calls, if…then…else…, loops and pattern matching.
Calling ocamlcp without the -p option defaults to -p fm, meaning that only function calls and pattern matching are profiled.
Note: Due to the implementation of streams and stream patterns as syntactic sugar, it is hard to predict what parts of stream expressions and patterns will be profiled by a given flag. To profile a program with streams, we recommend using ocamlcp -p a.
Running a bytecode executable file that has been compiled with ocamlcp records the execution counts for the specified parts of the program and saves them in a file called ocamlprof.dump in the current directory.
If the environment variable OCAMLPROF_DUMP is set when the program exits, its value is used as the file name instead of ocamlprof.dump.
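For example, to direct the counts of one particular run to a separate file (the file name run1.dump below is arbitrary):

OCAMLPROF_DUMP=run1.dump ./myprog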
The dump file is written only if the program terminates normally (by calling exit or by falling through). It is not written if the program terminates with an uncaught exception.
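If the program may let an exception escape to the top level, one possible workaround (a sketch; the function main below is hypothetical) is to catch the exception explicitly and call exit, so that the dump file is still written:

let () =
  (* catch any escaping exception, report it, and exit normally
     so that the profiling dump is written *)
  try main () with e -> prerr_endline (Printexc.to_string e); exit 2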
If a compatible dump file already exists in the current directory, then the profiling information is accumulated in this dump file. This allows, for instance, the profiling of several executions of a program on different inputs.
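For instance, assuming myprog takes its input file on the command line (the names below are illustrative), two successive runs accumulate their counts into the same ocamlprof.dump:

./myprog input1.txt
./myprog input2.txt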
The ocamlprof command produces a source listing of the program modules where execution counts have been inserted as comments. For instance,
ocamlprof foo.ml
prints the source code for the foo module, with comments indicating how many times the functions in this module have been called. Naturally, this information is accurate only if the source file has not been modified since the profiling execution took place.
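With the default -p fm flags, the annotated listing might look like the following; the function and the count shown are purely illustrative:

let rec fib n =
  (* 1664080 *) if n <= 1 then 1 else fib (n - 1) + fib (n - 2)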
The following options are recognized by ocamlprof:
Profiling with ocamlprof only records execution counts, not the actual time spent in each function. There is currently no way to perform time profiling on bytecode programs generated by ocamlc.
Native-code programs generated by ocamlopt can be profiled for time and execution counts using the -p option and the standard Unix profiler gprof. Just add the -p option when compiling and linking the program:
ocamlopt -o myprog -p other-options files
./myprog
gprof myprog
Caml function names in the output of gprof have the following format:
Module-name_function-name_unique-number
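For example, a function fib defined in a module Foo would appear under a name such as Foo_fib_58, where the trailing number (purely illustrative here) is chosen by the compiler.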
Other functions shown are either parts of the Caml run-time system or external C functions linked with the program.
The output of gprof is described in the Unix manual page for gprof(1). It generally consists of two parts: a “flat” profile showing the time spent in each function and the number of invocations of each function, and a “hierarchical” profile based on the call graph. Currently, only the Intel x86/Linux and Alpha/Digital Unix ports of ocamlopt support the two profiles. On other platforms, gprof will report only the “flat” profile with just time information. When reading the output of gprof, keep in mind that the accumulated times computed by gprof are based on heuristics and may not be exact.