@c Copyright (C) 1988-2026 Free Software Foundation, Inc.
@c This is part of the GCC manual.
@c For copying conditions, see the file gcc.texi.
@node Parameters
@chapter Parameters
@cindex parameters
@cindex command-line parameters
In some places, GCC uses parameters that are settable from the command line
instead of arbitrary hard-wired constants. Many of these parameters
control the amount of optimization that is done. For example, GCC does not
inline functions that contain more than a certain number of instructions.
You can control these parameters with the @option{--param} command-line option:
@opindex param
@example
--param @var{name}=@var{value}
--param=@var{name}=@var{value}
@end example
The names of specific parameters, and the meaning of the values, are
tied to the internals of the compiler, and are subject to change
without notice in future releases. Not all parameters are documented.
To get a list of parameters supported by GCC, use the
@option{--help=params} option.
The @var{value} is an integer. In order to get the minimal, maximal
and default values of a parameter, use the @option{--help=param -Q}
options.
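For example, to list the supported parameters together with their
minimal, maximal and default values, and then compile with a larger
single-function inlining limit (the input file name and the value shown
are purely illustrative):

@example
gcc -Q --help=params
gcc -O2 --param max-inline-insns-single=400 foo.c
@end example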
@menu
* General Parameters:: Parameters recognized on all targets.
* Target-Specific Parameters:: Parameters for specific targets.
@end menu
@node General Parameters
@section General Parameters
The following choices of @var{name} are recognized for all targets:
@table @gcctabopt
@paindex auto-profile-bbs
@item auto-profile-bbs
If non-zero and used together with @option{-fauto-profile}, the auto-profile
data is used to determine the basic block profile. If zero, only the
function-level profile is read.
@paindex auto-profile-reorder-only
@item auto-profile-reorder-only
Enable only function reordering with auto-profile.
@paindex phiopt-factor-max-stmts-live
@item phiopt-factor-max-stmts-live
When factoring statements out of if/then/else, this is the maximum number
of statements
after the defining statement to be allowed to extend the lifetime of a name.
@paindex predictable-branch-outcome
@item predictable-branch-outcome
When a branch is predicted to be taken with probability lower than this
threshold (as a percentage), it is considered well-predictable.
@paindex max-rtl-if-conversion-insns
@item max-rtl-if-conversion-insns
RTL if-conversion tries to remove conditional branches around a block and
replace them with conditionally executed instructions. This parameter
gives the maximum number of instructions in a block that should be
considered for if-conversion. The compiler
also uses other heuristics to decide whether if-conversion is likely to be
profitable.
@paindex file-cache-files
@item file-cache-files
The maximum number of files in the file cache.
The file cache is used to print source lines in diagnostics and do some
source checks like @option{-Wmisleading-indentation}.
@paindex file-cache-lines
@item file-cache-lines
Maximum number of lines to index into file cache.
When zero this is automatically sized.
The file cache is used to print source lines in diagnostics and do some
source checks like @option{-Wmisleading-indentation}.
@paindex max-rtl-if-conversion-predictable-cost
@item max-rtl-if-conversion-predictable-cost
RTL if-conversion tries to remove conditional branches around a block
and replace them with conditionally executed instructions. These parameters
give the maximum permissible cost for the sequence that would be generated
by if-conversion depending on whether the branch is statically determined
to be predictable or not. The units for this parameter are the same as
those for the GCC internal seq_cost metric. The compiler tries to
provide a reasonable default for this parameter using the @code{BRANCH_COST}
target macro.
@paindex max-crossjump-edges
@item max-crossjump-edges
The maximum number of incoming edges to consider for cross-jumping.
The algorithm used by @option{-fcrossjumping} is @math{O(N^2)} in
the number of edges incoming to each block. Increasing values mean
more aggressive optimization, increasing compilation time for probably
only a small improvement in executable size.
@paindex min-crossjump-insns
@item min-crossjump-insns
The minimum number of instructions that must be matched at the end
of two blocks before cross-jumping is performed on them. This
value is ignored in the case where all instructions in the block being
cross-jumped from are matched.
@paindex max-grow-copy-bb-insns
@item max-grow-copy-bb-insns
The maximum code size expansion factor when copying basic blocks
instead of jumping. The expansion is relative to a jump instruction.
@paindex max-goto-duplication-insns
@item max-goto-duplication-insns
The maximum number of instructions to duplicate to a block that jumps
to a computed goto. To avoid @math{O(N^2)} behavior in a number of
passes, GCC factors computed gotos early in the compilation process,
and unfactors them as late as possible. Only computed jumps at the
end of basic blocks with no more than @code{max-goto-duplication-insns}
instructions are unfactored.
@paindex max-delay-slot-insn-search
@item max-delay-slot-insn-search
The maximum number of instructions to consider when looking for an
instruction to fill a delay slot. If more than this arbitrary number of
instructions are searched, the time savings from filling the delay slot
are minimal, so stop searching. Increasing values mean more
aggressive optimization, increasing compilation time for probably
only a small improvement in execution time.
@paindex max-delay-slot-live-search
@item max-delay-slot-live-search
When trying to fill delay slots, the maximum number of instructions to
consider when searching for a block with valid live register
information. Increasing this arbitrarily-chosen value means more
aggressive optimization, increasing the compilation time. This parameter
should be removed when the delay slot code is rewritten to maintain the
control-flow graph.
@paindex max-devirt-targets
@item max-devirt-targets
This limits the number of functions a virtual call may be speculatively
devirtualized to using static analysis (without profile feedback).
@paindex max-gcse-memory
@item max-gcse-memory
The approximate maximum amount of memory in @code{kB} that can be allocated in
order to perform the global common subexpression elimination
optimization. If more memory than specified is required, the
optimization is not done.
@paindex max-gcse-insertion-ratio
@item max-gcse-insertion-ratio
If the ratio of expression insertions to deletions is larger than this value
for any expression, then RTL PRE does not insert or remove the expression and
thus leaves partially redundant computations in the instruction stream.
@paindex max-pending-list-length
@item max-pending-list-length
The maximum number of pending dependencies scheduling allows
before flushing the current state and starting over. Large functions
with few branches or calls can create excessively large lists which
needlessly consume memory and resources.
@paindex max-modulo-backtrack-attempts
@item max-modulo-backtrack-attempts
The maximum number of backtrack attempts the scheduler should make
when modulo scheduling a loop. Larger values can exponentially increase
compilation time.
@paindex max-inline-functions-called-once-loop-depth
@item max-inline-functions-called-once-loop-depth
Maximal loop depth of a call considered by inline heuristics that try to
inline all functions called once.
@paindex max-inline-functions-called-once-insns
@item max-inline-functions-called-once-insns
Maximal estimated size of functions produced while inlining functions called
once.
@paindex max-inline-insns-single
@item max-inline-insns-single
Several parameters control the tree inliner used in GCC@. This number sets the
maximum number of instructions (counted in GCC's internal representation) in a
single function that the tree inliner considers for inlining. This only
affects functions declared inline and methods implemented in a class
declaration (C++).
@paindex max-inline-insns-auto
@item max-inline-insns-auto
When you use @option{-finline-functions} (included in @option{-O3}),
a lot of functions that would otherwise not be considered for inlining
by the compiler are investigated. To those functions, a different
(more restrictive) limit compared to functions declared inline can
be applied (@option{--param max-inline-insns-auto}).
@paindex max-inline-insns-small
@item max-inline-insns-small
This is the bound applied to calls that are considered relevant with
@option{-finline-small-functions}.
@paindex max-inline-insns-size
@item max-inline-insns-size
This is the bound applied to calls that are optimized for size. Small growth
may be desirable to anticipate optimization opportunities exposed by inlining.
@paindex uninlined-function-insns
@item uninlined-function-insns
Number of instructions accounted by inliner for function overhead such as
function prologue and epilogue.
@paindex uninlined-function-time
@item uninlined-function-time
Extra time accounted by inliner for function overhead such as time needed to
execute function prologue and epilogue.
@paindex inline-heuristics-hint-percent
@item inline-heuristics-hint-percent
The scale (as a percentage) applied to @option{inline-insns-single},
@option{inline-insns-single-O2}, @option{inline-insns-auto}
when inline heuristics hint that inlining is
very profitable. It enables later optimizations.
@paindex uninlined-thunk-insns
@item uninlined-thunk-insns
@paindex uninlined-thunk-time
@item uninlined-thunk-time
Same as @option{--param uninlined-function-insns} and
@option{--param uninlined-function-time}, but applied to function thunks.
@paindex inline-min-speedup
@item inline-min-speedup
When the estimated performance improvement of caller plus callee runtime
exceeds this threshold (as a percentage),
the function can be inlined regardless of the limits on
@option{--param max-inline-insns-single} and @option{--param
max-inline-insns-auto}.
@paindex large-function-insns
@item large-function-insns
The limit specifying really large functions. For functions larger than this
limit after inlining, inlining is constrained by
@option{--param large-function-growth}. This parameter is useful primarily
to avoid extreme compilation time caused by non-linear algorithms used by the
back end.
@paindex large-function-growth
@item large-function-growth
Specifies maximal growth of large functions caused by inlining,
as a percentage.
For example, a parameter value of 100 limits large function growth to 2.0 times
the original size.
@paindex large-unit-insns
@item large-unit-insns
The limit specifying a large translation unit. Growth caused by inlining of
units larger than this limit is limited by @option{--param inline-unit-growth}.
For small units this might be too tight.
For example, consider a unit consisting of function A
that is inline and B that just calls A three times. If B is small relative to
A, the growth of the unit is 300% and yet such inlining is very sane. For very
large units consisting of small inlineable functions, however, the overall unit
growth limit is needed to avoid exponential explosion of code size. Thus for
smaller units, the size is increased to @option{--param large-unit-insns}
before applying @option{--param inline-unit-growth}.
@paindex lazy-modules
@item lazy-modules
Maximum number of concurrently open C++ module files when lazy loading.
@paindex inline-unit-growth
@item inline-unit-growth
Specifies maximal overall growth of the compilation unit caused by inlining.
For example, parameter value 20 limits unit growth to 1.2 times the original
size. Cold functions (either marked cold via an attribute or by profile
feedback) are not accounted into the unit size.
@paindex ipa-cp-unit-growth
@item ipa-cp-unit-growth
Specifies maximal overall growth of the compilation unit caused by
interprocedural constant propagation. For example, parameter value 10 limits
unit growth to 1.1 times the original size.
@paindex ipa-cp-large-unit-insns
@item ipa-cp-large-unit-insns
The size of a translation unit that the IPA-CP pass considers large.
@paindex large-stack-frame
@item large-stack-frame
The limit specifying large stack frames. While inlining, the algorithm tries
to not grow past this limit too much.
@paindex large-stack-frame-growth
@item large-stack-frame-growth
Specifies maximal growth of large stack frames caused by inlining,
as a percentage of the original size.
For example, parameter value 1000 limits large stack frame growth to 11 times
the original size.
@paindex max-inline-insns-recursive
@paindex max-inline-insns-recursive-auto
@item max-inline-insns-recursive
@itemx max-inline-insns-recursive-auto
Specifies the maximum number of instructions an out-of-line copy of a
self-recursive inline
function can grow into by performing recursive inlining.
@option{--param max-inline-insns-recursive} applies to functions
declared inline.
For functions not declared inline, recursive inlining
happens only when @option{-finline-functions} (included in @option{-O3}) is
enabled; @option{--param max-inline-insns-recursive-auto} applies instead.
@paindex max-inline-recursive-depth
@paindex max-inline-recursive-depth-auto
@item max-inline-recursive-depth
@itemx max-inline-recursive-depth-auto
Specifies the maximum recursion depth used for recursive inlining.
@option{--param max-inline-recursive-depth} applies to functions
declared inline. For functions not declared inline, recursive inlining
happens only when @option{-finline-functions} (included in @option{-O3}) is
enabled; @option{--param max-inline-recursive-depth-auto} applies instead.
@paindex min-inline-recursive-probability
@item min-inline-recursive-probability
Recursive inlining is profitable only for functions with deep recursion
on average, and can hurt functions with small recursion depth by
increasing the prologue size or the complexity of the function body for
other optimizers.
When profile feedback is available (see @option{-fprofile-generate}),
the actual
recursion depth can be guessed from the probability that function recurses
via a given call expression. This parameter limits inlining only to call
expressions whose probability exceeds the given threshold (as a percentage).
@paindex early-inlining-insns
@item early-inlining-insns
Specify growth that the early inliner can make. In effect it increases
the amount of inlining for code having a large abstraction penalty.
@paindex max-early-inliner-iterations
@item max-early-inliner-iterations
Limit of iterations of the early inliner. This basically bounds
the number of nested indirect calls the early inliner can resolve.
Deeper chains are still handled by late inlining.
@paindex comdat-sharing-probability
@item comdat-sharing-probability
Probability (as a percentage) that C++ inline functions with comdat
visibility are shared across multiple compilation units.
@paindex modref-max-bases
@item modref-max-bases
@paindex modref-max-refs
@item modref-max-refs
@paindex modref-max-accesses
@item modref-max-accesses
Specifies the maximal number of base pointers, references and accesses stored
for a single function by mod/ref analysis.
@paindex modref-max-tests
@item modref-max-tests
Specifies the maximal number of tests the alias oracle can perform to disambiguate
memory locations using the mod/ref information. This parameter ought to be
bigger than @option{--param modref-max-bases} and @option{--param
modref-max-refs}.
@paindex modref-max-depth
@item modref-max-depth
Specifies the maximum depth of DFS walk used by modref escape analysis.
Setting to 0 disables the analysis completely.
@paindex modref-max-escape-points
@item modref-max-escape-points
Specifies the maximum number of escape points tracked by modref per SSA-name.
@paindex modref-max-adjustments
@item modref-max-adjustments
Specifies the maximum number of times the access range is enlarged during
modref dataflow analysis.
@paindex profile-func-internal-id
@item profile-func-internal-id
A parameter to control whether to use the function's internal id in profile
database lookup. If the value is 0, the compiler uses an id that
is based on the function's assembler name and filename, which makes old profile
data more tolerant of source changes such as function reordering.
@paindex min-vect-loop-bound
@item min-vect-loop-bound
The minimum number of iterations under which loops are not vectorized
when @option{-ftree-vectorize} is used. The number of iterations after
vectorization needs to be greater than the value specified by this option
to allow vectorization.
@paindex gcse-cost-distance-ratio
@item gcse-cost-distance-ratio
Scaling factor in calculation of maximum distance an expression
can be moved by GCSE optimizations. This is currently supported only in the
code hoisting pass. The bigger the ratio, the more aggressive code hoisting
is with simple expressions, i.e., the expressions that have cost
less than @option{gcse-unrestricted-cost}. Specifying 0 disables
hoisting of simple expressions.
@paindex gcse-unrestricted-cost
@item gcse-unrestricted-cost
Cost, roughly measured as the cost of a single typical machine
instruction, at which GCSE optimizations do not constrain
the distance an expression can travel. This is currently
supported only in the code hoisting pass. The lesser the cost,
the more aggressive code hoisting is. Specifying 0
allows all expressions to travel unrestricted distances.
@paindex max-hoist-depth
@item max-hoist-depth
The depth of search in the dominator tree for expressions to hoist.
This is used to avoid quadratic behavior in the hoisting algorithm.
A value of 0 does not limit the search, but may slow down compilation
of huge functions.
@paindex max-tail-merge-comparisons
@item max-tail-merge-comparisons
The maximum number of similar basic blocks to compare a basic block with.
This is used to
avoid quadratic behavior in tree tail merging.
@paindex max-tail-merge-iterations
@item max-tail-merge-iterations
The maximum number of iterations of the tree tail merging pass over a function.
This is used to limit compilation time in this pass.
@paindex store-merging-allow-unaligned
@item store-merging-allow-unaligned
Allow the store merging pass to introduce unaligned stores if it is legal to
do so.
@paindex max-stores-to-merge
@item max-stores-to-merge
The maximum number of stores to attempt to merge into wider stores in the store
merging pass.
@paindex max-store-chains-to-track
@item max-store-chains-to-track
The maximum number of store chains to track at the same time in the attempt
to merge them into wider stores in the store merging pass.
@paindex max-stores-to-track
@item max-stores-to-track
The maximum number of stores to track at the same time in the attempt to
merge them into wider stores in the store merging pass.
@paindex max-unrolled-insns
@item max-unrolled-insns
The maximum number of instructions that a loop may have to be unrolled.
If a loop is unrolled, this parameter also determines how many times
the loop code is unrolled.
@paindex max-average-unrolled-insns
@item max-average-unrolled-insns
The maximum number of instructions biased by probabilities of their execution
that a loop may have to be unrolled. If a loop is unrolled,
this parameter also determines how many times the loop code is unrolled.
@paindex max-unroll-times
@item max-unroll-times
The maximum number of unrollings of a single loop.
@paindex max-peeled-insns
@item max-peeled-insns
The maximum number of instructions that a loop may have to be peeled.
If a loop is peeled, this parameter also determines how many times
the loop code is peeled.
@paindex max-peel-times
@item max-peel-times
The maximum number of peelings of a single loop.
@paindex max-peel-branches
@item max-peel-branches
The maximum number of branches on the hot path through the peeled sequence.
@paindex max-completely-peeled-insns
@item max-completely-peeled-insns
The maximum number of insns of a completely peeled loop.
@paindex max-completely-peel-times
@item max-completely-peel-times
The maximum number of iterations of a loop to be suitable for complete peeling.
@paindex max-completely-peel-loop-nest-depth
@item max-completely-peel-loop-nest-depth
The maximum depth of a loop nest suitable for complete peeling.
@paindex max-unswitch-insns
@item max-unswitch-insns
The maximum number of insns of an unswitched loop.
@paindex max-unswitch-depth
@item max-unswitch-depth
The maximum depth of a loop nest to be unswitched.
@paindex lim-expensive
@item lim-expensive
The minimum cost of an expensive expression in the loop invariant motion pass.
@paindex min-loop-cond-split-prob
@item min-loop-cond-split-prob
When FDO profile information is available, @option{min-loop-cond-split-prob}
specifies the minimum probability of a semi-invariant condition statement
required to trigger loop splitting. The value is a percentage.
@paindex iv-consider-all-candidates-bound
@item iv-consider-all-candidates-bound
Bound on number of candidates for induction variables, below which
all candidates are considered for each use in induction variable
optimizations. If there are more candidates than this,
only the most relevant ones are considered to avoid quadratic time complexity.
@paindex iv-max-considered-uses
@item iv-max-considered-uses
The induction variable optimizations give up on loops that contain more
induction variable uses than this limit.
@paindex iv-always-prune-cand-set-bound
@item iv-always-prune-cand-set-bound
This parameter is used by induction variable optimization.
If the number of candidates in the iv set is larger than this value,
always try to remove unnecessary ivs from the set
when adding a new one.
@paindex avg-loop-niter
@item avg-loop-niter
Average number of iterations of a loop.
@paindex dse-max-object-size
@item dse-max-object-size
Maximum size (in bytes) of objects tracked bytewise by dead store elimination.
Larger values may result in larger compilation times.
@paindex dse-max-alias-queries-per-store
@item dse-max-alias-queries-per-store
Maximum number of queries into the alias oracle per store.
Larger values result in larger compilation times and may result in more
removed dead stores.
@paindex scev-max-expr-size
@item scev-max-expr-size
Bound on size of expressions used in the scalar evolutions analyzer.
Large expressions slow the analyzer.
@paindex scev-max-expr-complexity
@item scev-max-expr-complexity
Bound on the complexity of the expressions in the scalar evolutions analyzer.
Complex expressions slow the analyzer.
@paindex max-tree-if-conversion-phi-args
@item max-tree-if-conversion-phi-args
Maximum number of arguments in a PHI supported by TREE if conversion
unless the loop is marked with a simd pragma.
@paindex vect-max-layout-candidates
@item vect-max-layout-candidates
The maximum number of possible vector layouts (such as permutations)
to consider when optimizing to-be-vectorized code.
@paindex vect-max-version-for-alignment-checks
@item vect-max-version-for-alignment-checks
The maximum number of run-time checks that can be performed when
doing loop versioning for alignment in the vectorizer.
@paindex vect-max-version-for-alias-checks
@item vect-max-version-for-alias-checks
The maximum number of run-time checks that can be performed when
doing loop versioning for alias in the vectorizer.
@paindex vect-max-peeling-for-alignment
@item vect-max-peeling-for-alignment
The maximum number of loop peels to enhance access alignment
for the vectorizer. A value of -1 means no limit.
@paindex max-iterations-to-track
@item max-iterations-to-track
The maximum number of iterations of a loop the brute-force algorithm
for analysis of the number of iterations of the loop tries to evaluate.
@paindex hot-bb-count-fraction
@item hot-bb-count-fraction
The denominator @var{n} of fraction 1/@var{n}
of the maximal execution count of a
basic block in the entire program that a basic block needs to at least
have in order to be considered hot. The default is 10000, which means
that a basic block is considered hot if its execution count is greater
than 1/10000 of the maximal execution count. 0 means that it is never
considered hot. Used in non-LTO mode.
@paindex hot-bb-count-ws-permille
@item hot-bb-count-ws-permille
The number of most executed permilles, ranging from 0 to 1000, of the
profiled execution of the entire program of which the execution count
of a basic block must be part in order to be considered hot. The
default is 990, which means that a basic block is considered hot if
its execution count contributes to the upper 990 permilles, or 99.0%,
of the profiled execution of the entire program. 0 means that it is
never considered hot. Used in LTO mode.
@paindex hot-bb-frequency-fraction
@item hot-bb-frequency-fraction
The denominator @var{n} of fraction 1/@var{n}
of the execution frequency of the
entry block of a function that a basic block of this function needs
to at least have in order to be considered hot. The default is 1000,
which means that a basic block is considered hot in a function if it
is executed more frequently than 1/1000 of the frequency of the entry
block of the function. 0 means that it is never considered hot.
@paindex unlikely-bb-count-fraction
@item unlikely-bb-count-fraction
The denominator @var{n} of fraction 1/@var{n}
of the number of profiled runs of
the entire program below which the execution count of a basic block
must be in order for the basic block to be considered unlikely executed.
The default is 20, which means that a basic block is considered unlikely
executed if it is executed in fewer than 1/20, or 5%, of the runs of
the program. 0 means that it is always considered unlikely executed.
@paindex max-predicted-iterations
@item max-predicted-iterations
The maximum number of loop iterations we predict statically. This is useful
in cases where a function contains a single loop with known bound and
another loop with unknown bound.
The known number of iterations is predicted correctly, while
the unknown number of iterations averages roughly 10. This means that the
loop without bounds appears artificially cold relative to the other one.
@paindex builtin-expect-probability
@item builtin-expect-probability
Control the probability that the expression passed to
@code{__builtin_expect} has the specified (expected) value, as a
percentage.
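As a sketch (the function name is made up for illustration), the
parameter scales the weight given to hints such as:

@example
if (__builtin_expect (err != 0, 0))   /* expected to be false */
  handle_rare_error ();
@end example

@noindent
Compiling with @option{--param builtin-expect-probability=95} makes the
compiler assume the expression matches the expected value 95% of the
time.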
@paindex builtin-string-cmp-inline-length
@item builtin-string-cmp-inline-length
The maximum length of a constant string for a builtin @code{strcmp} or
@code{memcmp} call to be eligible for inlining.
@paindex align-threshold
@item align-threshold
Select fraction of the maximal frequency of executions of a basic block in
a function to align the basic block.
@paindex align-loop-iterations
@item align-loop-iterations
A loop expected to iterate at least the selected number of iterations is
aligned.
@paindex tracer-dynamic-coverage
@paindex tracer-dynamic-coverage-feedback
@item tracer-dynamic-coverage
@itemx tracer-dynamic-coverage-feedback
This value is used to limit superblock formation once the given percentage of
executed instructions is covered. This limits unnecessary code size
expansion.
The @option{tracer-dynamic-coverage-feedback} parameter
is used only when profile
feedback is available. The real profiles (as opposed to statically estimated
ones) are much less balanced, allowing the threshold to be a larger value.
@paindex tracer-max-code-growth
@item tracer-max-code-growth
Stop tail duplication once code growth has reached the given percentage. This
is a rather artificial limit, as most of the duplicates are eliminated later in
cross jumping, so it may be set to much higher values than the desired code
growth.
@paindex tracer-min-branch-ratio
@item tracer-min-branch-ratio
Stop reverse growth when the reverse probability of best edge is less than this
threshold (as a percentage).
@paindex tracer-min-branch-probability
@paindex tracer-min-branch-probability-feedback
@item tracer-min-branch-probability
@itemx tracer-min-branch-probability-feedback
Stop forward growth if the best edge has probability lower than this
threshold.
Similarly to @option{tracer-dynamic-coverage}, two parameters are
provided. @option{tracer-min-branch-probability-feedback} is used for
compilation with profile feedback and @option{tracer-min-branch-probability}
for compilation without. The value for compilation with profile feedback
needs to be more conservative (higher) in order to make tracer
effective.
@paindex stack-clash-protection-guard-size
@item stack-clash-protection-guard-size
Specify the size of the operating system provided stack guard as
2 raised to @var{num} bytes. Higher values may reduce the
number of explicit probes, but a value larger than the guard provided
by the operating system leaves code vulnerable to stack clash style attacks.
@paindex stack-clash-protection-probe-interval
@item stack-clash-protection-probe-interval
Stack clash protection involves probing stack space as it is allocated.
This parameter controls the maximum distance between probes into the stack
as 2 raised to @var{num} bytes.
Higher values may reduce the number of explicit probes,
but a value larger than the guard provided by the operating system leaves
code vulnerable to stack clash style attacks.
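For example (the values are illustrative), on a system whose kernel
provides a 64 KiB stack guard region, the matching settings would be:

@example
gcc -O2 -fstack-clash-protection \
    --param stack-clash-protection-guard-size=16 \
    --param stack-clash-protection-probe-interval=16 foo.c
@end example

@noindent
since 2 raised to 16 is 65536 bytes (64 KiB).  The probe interval should
not be larger than the guard size, for the reason given above.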
@paindex max-cse-path-length
@item max-cse-path-length
The maximum number of basic blocks on a path that CSE considers.
@paindex max-cse-insns
@item max-cse-insns
The maximum number of instructions CSE processes before flushing.
@paindex ggc-min-expand
@item ggc-min-expand
GCC uses a garbage collector to manage its own memory allocation. This
parameter specifies the minimum percentage by which the garbage
collector's heap should be allowed to expand between collections.
Tuning this may improve compilation speed; it has no effect on code
generation.
The default is 30% + 70% * (RAM/1GB) with an upper bound of 100% when
RAM >= 1GB@. If @code{getrlimit} is available, the notion of ``RAM'' is
the smallest of actual RAM and @code{RLIMIT_DATA} or @code{RLIMIT_AS}. If
GCC is not able to calculate RAM on a particular platform, the lower
bound of 30% is used. Setting this parameter and
@option{ggc-min-heapsize} to zero causes a full collection to occur at
every opportunity. This is extremely slow, but can be useful for
debugging.
@paindex ggc-min-heapsize
@item ggc-min-heapsize
Minimum size of the garbage collector's heap before it begins bothering
to collect garbage. The first collection occurs after the heap expands
by @option{ggc-min-expand}% beyond @option{ggc-min-heapsize}. Again,
tuning this may improve compilation speed, and has no effect on code
generation.
The default is the smaller of RAM/8, @code{RLIMIT_RSS}, or a limit that
tries to ensure that @code{RLIMIT_DATA} or @code{RLIMIT_AS} are not exceeded,
but with a lower bound of 4096 (four megabytes) and an upper bound of
131072 (128 megabytes). If GCC is not able to calculate RAM on a
particular platform, the lower bound is used. Setting this parameter
very large effectively disables garbage collection. Setting this
parameter and @option{ggc-min-expand} to zero causes a full collection
to occur at every opportunity.
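For example, to force a full collection at every opportunity while
tracking down a garbage-collection problem (@file{foo.c} is just a
placeholder input file; this is extremely slow):

@example
gcc -c foo.c --param ggc-min-expand=0 --param ggc-min-heapsize=0
@end example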
@paindex max-reload-search-insns
@item max-reload-search-insns
The maximum number of instructions reload should look backward for an
equivalent register. Increasing values mean more aggressive optimization,
increasing compilation time for probably slightly better performance.
@paindex max-cselib-memory-locations
@item max-cselib-memory-locations
The maximum number of memory locations cselib should take into account.
Increasing values mean more aggressive optimization, increasing compilation
time for probably slightly better performance.
@paindex max-sched-ready-insns
@item max-sched-ready-insns
The maximum number of instructions ready to be issued that the scheduler
should consider at any given time during the first scheduling pass.
Increasing values mean more thorough searches, increasing compilation time
for probably little benefit.
@paindex max-sched-region-blocks
@item max-sched-region-blocks
The maximum number of blocks in a region to be considered for
interblock scheduling.
@paindex max-pipeline-region-blocks
@item max-pipeline-region-blocks
The maximum number of blocks in a region to be considered for
pipelining in the selective scheduler.
@paindex max-sched-region-insns
@item max-sched-region-insns
The maximum number of insns in a region to be considered for
interblock scheduling.
@paindex max-pipeline-region-insns
@item max-pipeline-region-insns
The maximum number of insns in a region to be considered for
pipelining in the selective scheduler.
@paindex min-spec-prob
@item min-spec-prob
The minimum probability (as a percentage) of reaching a source block
for interblock speculative scheduling.
@paindex max-sched-extend-regions-iters
@item max-sched-extend-regions-iters
The maximum number of iterations through CFG to extend regions.
A value of 0 disables region extensions.
@paindex max-sched-insn-conflict-delay
@item max-sched-insn-conflict-delay
The maximum conflict delay for an insn to be considered for speculative motion.
@paindex sched-spec-prob-cutoff
@item sched-spec-prob-cutoff
The minimal probability of speculation success (as a percentage) required for
speculative insns to be scheduled.
@paindex sched-state-edge-prob-cutoff
@item sched-state-edge-prob-cutoff
The minimum probability an edge must have for the scheduler to save its
state across it.
@paindex sched-mem-true-dep-cost
@item sched-mem-true-dep-cost
Minimal distance (in CPU cycles) between a store and a load targeting the
same memory location.
@paindex selsched-max-lookahead
@item selsched-max-lookahead
The maximum size of the lookahead window of selective scheduling. It is the
depth of the search for available instructions.
@paindex selsched-max-sched-times
@item selsched-max-sched-times
The maximum number of times that an instruction is scheduled during
selective scheduling. This is the limit on the number of iterations
through which the instruction may be pipelined.
@paindex selsched-insns-to-rename
@item selsched-insns-to-rename
The maximum number of best instructions in the ready list that are considered
for renaming in the selective scheduler.
@paindex sms-min-sc
@item sms-min-sc
The minimum value of the stage count that the swing modulo scheduler
generates.
@paindex max-last-value-rtl
@item max-last-value-rtl
The maximum size, measured as number of RTLs, that can be recorded in an
expression in the combiner for a pseudo-register as the last known value of
that register.
@paindex max-combine-insns
@item max-combine-insns
The maximum number of instructions the RTL combiner tries to combine.
@paindex max-combine-search-insns
@item max-combine-search-insns
The maximum number of instructions that the RTL combiner searches in order
to find the next use of a given register definition. If this limit is reached
without finding such a use, the combiner stops trying to optimize the
definition.
Currently this limit only applies after certain successful combination
attempts, but it could be extended to other cases in future.
@paindex integer-share-limit
@item integer-share-limit
Small integer constants can use a shared data structure, reducing the
compiler's memory usage and increasing its speed. This sets the maximum
value of a shared integer constant.
@paindex ssp-buffer-size
@item ssp-buffer-size
The minimum size of buffers (i.e.@: arrays) that receive stack smashing
protection when @option{-fstack-protector} is used.
@paindex min-size-for-stack-sharing
@item min-size-for-stack-sharing
The minimum size of variables taking part in stack slot sharing when not
optimizing.
@paindex max-jump-thread-duplication-stmts
@item max-jump-thread-duplication-stmts
Maximum number of statements allowed in a block that needs to be
duplicated when threading jumps.
@paindex max-jump-thread-paths
@item max-jump-thread-paths
The maximum number of paths to consider when searching for jump threading
opportunities. When arriving at a block, incoming edges are only considered
if the number of paths to be searched so far multiplied by the number of
incoming edges does not exhaust the specified maximum number of paths to
consider.
@paindex max-fields-for-field-sensitive
@item max-fields-for-field-sensitive
Maximum number of fields in a structure treated in
a field-sensitive manner during pointer analysis.
@paindex prefetch-latency
@item prefetch-latency
An estimate of the average number of instructions that are executed before
a prefetch finishes. The distance prefetched ahead is proportional
to this constant. Increasing this number may also lead to fewer
streams being prefetched (see @option{simultaneous-prefetches}).
@paindex simultaneous-prefetches
@item simultaneous-prefetches
Maximum number of prefetches that can run at the same time.
@paindex l1-cache-line-size
@item l1-cache-line-size
The size of a cache line in the L1 data cache, in bytes.
@paindex l1-cache-size
@item l1-cache-size
The size of the L1 data cache, in kilobytes.
@paindex l2-cache-size
@item l2-cache-size
The size of the L2 data cache, in kilobytes.
@paindex prefetch-dynamic-strides
@item prefetch-dynamic-strides
Whether the loop array prefetch pass should issue software prefetch hints
for strides that are non-constant. In some cases this may be
beneficial, though the fact that the stride is non-constant may make it
hard to predict when there is clear benefit to issuing these hints.
Set to 1 if the prefetch hints should be issued for non-constant
strides. Set to 0 if prefetch hints should be issued only for strides that
are known to be constant and below @option{prefetch-minimum-stride}.
@paindex prefetch-minimum-stride
@item prefetch-minimum-stride
Minimum constant stride, in bytes, to start using prefetch hints for. If
the stride is less than this threshold, prefetch hints are not issued.
This setting is useful for processors that have hardware prefetchers, in
which case there may be conflicts between the hardware prefetchers and
the software prefetchers. If the hardware prefetchers have a maximum
stride they can handle, it should be used here to improve the use of
software prefetchers.
A value of -1 means we don't have a threshold and therefore
prefetch hints can be issued for any constant stride.
This setting is only useful for strides that are known and constant.
@paindex destructive-interference-size
@item destructive-interference-size
@paindex constructive-interference-size
@item constructive-interference-size
The values for the C++17 variables
@code{std::hardware_destructive_interference_size} and
@code{std::hardware_constructive_interference_size}. The destructive
interference size is the minimum recommended offset between two
independent concurrently-accessed objects; the constructive
interference size is the maximum recommended size of contiguous memory
accessed together. Typically both are the size of an L1 cache
line for the target, in bytes. For a generic target covering a range of L1
cache line sizes, typically the constructive interference size is
the small end of the range and the destructive size is the large
end.
The destructive interference size is intended to be used for layout,
and thus has ABI impact. The default value is not expected to be
stable, and on some targets varies with @option{-mtune}, so use of
this variable in a context where ABI stability is important, such as
the public interface of a library, is strongly discouraged; if it is
used in that context, users can stabilize the value using this
option.
The constructive interference size is less sensitive, as it is
typically only used in a @samp{static_assert} to make sure that a type
fits within a cache line.
See also @option{-Winterference-size}.
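As a minimal illustration (the type and member names are invented for
this example, which requires @option{-std=c++17} or later), a program
might use the two constants roughly as follows:

@example
#include <new>

// Keep two heavily written counters on separate cache lines so that
// concurrent updates from different threads do not interfere.
struct counters
@{
  alignas (std::hardware_destructive_interference_size) unsigned long hits;
  alignas (std::hardware_destructive_interference_size) unsigned long misses;
@};

// Check that a small, jointly accessed type fits within one cache line.
struct point @{ float x, y; @};
static_assert (sizeof (point)
               <= std::hardware_constructive_interference_size,
               "point should fit within one cache line");
@end example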
@paindex loop-interchange-max-num-stmts
@item loop-interchange-max-num-stmts
The maximum number of stmts in a loop to be interchanged.
@paindex loop-interchange-stride-ratio
@item loop-interchange-stride-ratio
The minimum ratio between the strides of two loops for interchange to be
profitable.
@paindex min-insn-to-prefetch-ratio
@item min-insn-to-prefetch-ratio
The minimum ratio between the number of instructions and the
number of prefetches to enable prefetching in a loop.
@paindex prefetch-min-insn-to-mem-ratio
@item prefetch-min-insn-to-mem-ratio
The minimum ratio between the number of instructions and the
number of memory references to enable prefetching in a loop.
@paindex use-canonical-types
@item use-canonical-types
Whether the compiler should use the ``canonical'' type system.
Should always be 1, which uses a more efficient internal
mechanism for comparing types in C++ and Objective-C++. However, if
bugs in the canonical type system are causing compilation failures,
set this value to 0 to disable canonical types.
@paindex switch-conversion-max-branch-ratio
@item switch-conversion-max-branch-ratio
Switch initialization conversion refuses to create arrays that are
bigger than @option{switch-conversion-max-branch-ratio} times the number of
branches in the switch.
@paindex max-partial-antic-length
@item max-partial-antic-length
Maximum length of the partial antic set computed during the tree
partial redundancy elimination optimization (@option{-ftree-pre}) when
optimizing at @option{-O3} and above. For some sorts of source code
the enhanced partial redundancy elimination optimization can run away,
consuming all of the memory available on the host machine. This
parameter sets a limit on the length of the sets that are computed,
which prevents the runaway behavior. Setting a value of 0 for
this parameter allows an unlimited set length.
@paindex rpo-vn-max-loop-depth
@item rpo-vn-max-loop-depth
Maximum loop depth that is value-numbered optimistically.
When the limit is exceeded, the innermost
@var{rpo-vn-max-loop-depth} loops and the outermost loop in the
loop nest are value-numbered optimistically and the remaining ones are not.
@paindex sccvn-max-alias-queries-per-access
@item sccvn-max-alias-queries-per-access
Maximum number of alias-oracle queries we perform when looking for
redundancies for loads and stores. If this limit is hit the search
is aborted and the load or store is not considered redundant. The
number of queries is algorithmically limited to the number of
stores on all paths from the load to the function entry.
@paindex ira-max-loops-num
@item ira-max-loops-num
IRA uses regional register allocation by default. If a function
contains more loops than the number given by this parameter, only at most
the given number of the most frequently-executed loops form regions
for regional register allocation.
@paindex ira-max-conflict-table-size
@item ira-max-conflict-table-size
Although IRA uses a sophisticated algorithm to compress the conflict
table, the table can still require excessive amounts of memory for
huge functions. If the conflict table for a function could be more
than the size in MB given by this parameter, the register allocator
instead uses a faster, simpler, and lower-quality
algorithm that does not require building a pseudo-register conflict table.
@paindex ira-loop-reserved-regs
@item ira-loop-reserved-regs
IRA can be used to evaluate more accurate register pressure in loops
for decisions to move loop invariants (see @option{-O3}). The number
of available registers reserved for some other purposes is given
by this parameter. The default value of the parameter
is the best found from numerous experiments.
@paindex ira-consider-dup-in-all-alts
@item ira-consider-dup-in-all-alts
Make IRA consider the matching constraint (duplicated operand number)
heavily in all available alternatives for the preferred register class.
If it is set to zero, IRA only respects the matching
constraint when it's in the only available alternative with an
appropriate register class. Otherwise, IRA checks all
available alternatives for the preferred register class even if it has
found some choice with an appropriate register class that satisfies the
found qualified matching constraint.
@paindex ira-simple-lra-insn-threshold
@item ira-simple-lra-insn-threshold
Approximate function insn number in 1K units triggering simple local RA.
@paindex lra-inheritance-ebb-probability-cutoff
@item lra-inheritance-ebb-probability-cutoff
LRA tries to reuse values reloaded in registers in subsequent insns.
This optimization is called inheritance. EBB is used as a region to
do this optimization. The parameter defines a minimal fall-through
edge probability (as a percentage) used to add BB to inheritance EBB in
LRA. The default value was chosen
from numerous runs of SPEC2000 on x86-64.
@paindex loop-invariant-max-bbs-in-loop
@item loop-invariant-max-bbs-in-loop
Loop invariant motion can be very expensive, both in compilation time and
in amount of needed compile-time memory, with very large loops. Loops
with more basic blocks than this parameter won't have loop invariant
motion optimization performed on them.
@paindex loop-max-datarefs-for-datadeps
@item loop-max-datarefs-for-datadeps
Building data dependencies is expensive for very large loops. This
parameter limits the number of data references in loops that are
considered for data dependence analysis. These large loops are not
handled by the optimizations using loop data dependencies.
@paindex max-vartrack-size
@item max-vartrack-size
Sets a maximum number of hash table slots to use during variable
tracking dataflow analysis of any function. If this limit is exceeded
with variable tracking at assignments enabled, analysis for that
function is retried without it, after removing all debug insns from
the function. If the limit is exceeded even without debug insns, var
tracking analysis is completely disabled for the function. Setting
the parameter to zero makes it unlimited.
@paindex max-vartrack-expr-depth
@item max-vartrack-expr-depth
Sets a maximum number of recursion levels when attempting to map
variable names or debug temporaries to value expressions. This trades
compilation time for more complete debug information. If this is set too
low, value expressions that are available and could be represented in
debug information may end up not being used; setting this higher may
enable the compiler to find more complex debug expressions, but compile
time and memory use may grow.
@paindex max-debug-marker-count
@item max-debug-marker-count
Sets a threshold on the number of debug markers (e.g.@: begin stmt
markers) to avoid complexity explosion at inlining or expanding to RTL.
If a function has more such gimple stmts than the set limit, such stmts
are dropped from the inlined copy of a function and from its RTL
expansion.
@paindex min-nondebug-insn-uid
@item min-nondebug-insn-uid
Use uids starting at this parameter for nondebug insns. The range below
the parameter is reserved exclusively for debug insns created by
@option{-fvar-tracking-assignments}, but debug insns may get
(non-overlapping) uids above it if the reserved range is exhausted.
@paindex ipa-sra-deref-prob-threshold
@item ipa-sra-deref-prob-threshold
IPA-SRA replaces a pointer that is known not to be NULL with one or more
new parameters only when the probability (as a percentage, relative to
function entry) of it being dereferenced is higher than this parameter.
@paindex ipa-sra-ptr-growth-factor
@item ipa-sra-ptr-growth-factor
IPA-SRA replaces a pointer to an aggregate with one or more new
parameters only when their cumulative size is less than or equal to
@option{ipa-sra-ptr-growth-factor} times the size of the original
pointer parameter.
@paindex ipa-sra-ptrwrap-growth-factor
@item ipa-sra-ptrwrap-growth-factor
Additional maximum allowed growth of the total size of new parameters
that IPA-SRA replaces a pointer to an aggregate with,
if it points to a local variable that the caller only writes to and
passes as an argument to other functions.
@paindex ipa-sra-max-replacements
@item ipa-sra-max-replacements
Maximum pieces of an aggregate that IPA-SRA tracks. As a
consequence, it is also the maximum number of replacements of a formal
parameter.
@paindex sra-max-scalarization-size-Ospeed
@paindex sra-max-scalarization-size-Osize
@item sra-max-scalarization-size-Ospeed
@itemx sra-max-scalarization-size-Osize
The two Scalar Replacement of Aggregates passes (SRA and IPA-SRA) aim to
replace scalar parts of aggregates with uses of independent scalar
variables. These parameters control the maximum size, in storage units,
of aggregates that are considered for replacement when compiling for
speed
(@option{sra-max-scalarization-size-Ospeed}) or size
(@option{sra-max-scalarization-size-Osize}) respectively.
@paindex sra-max-propagations
@item sra-max-propagations
The maximum number of artificial accesses that Scalar Replacement of
Aggregates (SRA) tracks, per one local variable, in order to
facilitate copy propagation.
@paindex tm-max-aggregate-size
@item tm-max-aggregate-size
When making copies of thread-local variables in a transaction, this
parameter specifies the size in bytes after which variables are
saved with the logging functions as opposed to save/restore code
sequence pairs. This option only applies when using
@option{-fgnu-tm}.
@paindex graphite-max-nb-scop-params
@item graphite-max-nb-scop-params
To avoid exponential effects in the Graphite loop transforms, the
number of parameters in a Static Control Part (SCoP) is bounded.
A value of zero can be used to lift
the bound. A variable whose value is unknown at compilation time and
defined outside a SCoP is a parameter of the SCoP.
@paindex hardcfr-max-blocks
@item hardcfr-max-blocks
Disable @option{-fharden-control-flow-redundancy} for functions with a
larger number of blocks than the specified value. Zero removes any
limit.
@paindex hardcfr-max-inline-blocks
@item hardcfr-max-inline-blocks
Force @option{-fharden-control-flow-redundancy} to use out-of-line
checking for functions with a larger number of basic blocks than the
specified value.
@paindex loop-block-tile-size
@item loop-block-tile-size
Loop blocking or strip mining transforms, enabled with
@option{-floop-block} or @option{-floop-strip-mine}, strip mine each
loop in the loop nest by a given number of iterations. The strip
length can be changed using the @option{loop-block-tile-size}
parameter.
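For instance (@file{foo.c} is a placeholder), a strip length of 64
iterations can be requested with:

@example
gcc -O2 -floop-block --param loop-block-tile-size=64 foo.c
@end example

@noindent
Note that Graphite transforms such as @option{-floop-block} are only
available when GCC is configured with ISL support for the Graphite loop
transformation infrastructure.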
@paindex ipa-jump-function-lookups
@item ipa-jump-function-lookups
Specifies the number of statements visited during jump function offset
discovery.
@paindex ipa-cp-value-list-size
@item ipa-cp-value-list-size
IPA-CP attempts to track all possible values and types passed to a function's
parameter in order to propagate them and perform devirtualization.
@option{ipa-cp-value-list-size} is the maximum number of values and types it
stores per one formal parameter of a function.
@paindex ipa-cp-eval-threshold
@item ipa-cp-eval-threshold
IPA-CP calculates its own cloning profitability score for each candidate
and performs those cloning opportunities whose scores exceed
@option{ipa-cp-eval-threshold}.
@paindex ipa-cp-max-recursive-depth
@item ipa-cp-max-recursive-depth
Maximum depth of recursive cloning for a self-recursive function.
@paindex ipa-cp-min-recursive-probability
@item ipa-cp-min-recursive-probability
Perform recursive cloning only when the probability of the call being
executed exceeds this parameter.
@paindex ipa-cp-recursive-freq-factor
@item ipa-cp-recursive-freq-factor
The number of times interprocedural constant propagation expects recursive
functions to call themselves.
@paindex ipa-cp-recursion-penalty
@item ipa-cp-recursion-penalty
Percentage penalty the recursive functions receive when they
are evaluated for cloning.
@paindex ipa-cp-single-call-penalty
@item ipa-cp-single-call-penalty
Percentage penalty functions containing a single call to another
function receive when they are evaluated for cloning.
@paindex ipa-cp-sweeps
@item ipa-cp-sweeps
The number of times the interprocedural constant propagation traverses
all functions to make cloning decisions.
@paindex ipa-max-agg-items
@item ipa-max-agg-items
IPA-CP is also capable of propagating a number of scalar values passed
in an aggregate. @option{ipa-max-agg-items} controls the maximum
number of such values per one parameter.
@paindex ipa-cp-loop-hint-bonus
@item ipa-cp-loop-hint-bonus
When IPA-CP determines that a cloning candidate would make the number
of iterations of a loop known, it adds a bonus of
@option{ipa-cp-loop-hint-bonus} to the profitability score of
the candidate.
@paindex ipa-max-loop-predicates
@item ipa-max-loop-predicates
The maximum number of different predicates IPA uses to describe when
loops in a function have known properties.
@paindex ipa-max-aa-steps
@item ipa-max-aa-steps
During its analysis of function bodies, IPA-CP employs alias analysis
in order to track values pointed to by function parameters. In order
not to spend too much time analyzing huge functions, it gives up and
considers all memory clobbered after examining
@option{ipa-max-aa-steps} statements modifying memory.
@paindex ipa-max-switch-predicate-bounds
@item ipa-max-switch-predicate-bounds
Maximal number of boundary endpoints of case ranges of a switch statement.
For a switch exceeding this limit, IPA-CP does not construct a cloning cost
predicate, which is used to estimate cloning benefit, for the default case
of the switch statement.
@paindex ipa-max-param-expr-ops
@item ipa-max-param-expr-ops
IPA-CP analyzes conditional statements that reference some function
parameter to estimate benefit for cloning upon certain constant value.
But if the number of operations in a parameter expression exceeds
@option{ipa-max-param-expr-ops}, the expression is treated as complicated,
and is not handled by IPA analysis.
@paindex lto-partitions
@item lto-partitions
Specify the desired number of partitions produced during WHOPR compilation.
The number of partitions should exceed the number of CPUs used for compilation.
@paindex lto-min-partition
@item lto-min-partition
Minimum partition size for WHOPR (in estimated instructions).
This prevents the expense of splitting very small programs into too many
partitions.
@paindex lto-max-partition
@item lto-max-partition
Maximum partition size for WHOPR (in estimated instructions),
providing an upper bound on the size of an individual partition.
Meant to be used only with balanced partitioning.
@paindex lto-partition-locality-frequency-cutoff
@item lto-partition-locality-frequency-cutoff
The denominator @var{n} of fraction 1/@var{n} of the execution frequency of
the callee to be cloned for a particular caller.
The special value of 0 means always clone, without a cut-off.
@paindex lto-partition-locality-size-cutoff
@item lto-partition-locality-size-cutoff
Size cut-off for a callee, including inlined calls, to be cloned for a
particular caller.
@paindex lto-max-locality-partition
@item lto-max-locality-partition
Maximal size of a locality partition for LTO (in estimated instructions).
A value of 0 results in the default value being used.
@paindex lto-max-streaming-parallelism
@item lto-max-streaming-parallelism
Maximal number of parallel processes used for LTO streaming.
@paindex cxx-max-namespaces-for-diagnostic-help
@item cxx-max-namespaces-for-diagnostic-help
The maximum number of namespaces to consult for suggestions when C++
name lookup fails for an identifier.
@paindex sink-frequency-threshold
@item sink-frequency-threshold
The maximum execution frequency (as a percentage) of the target block
relative to a statement's original block that still allows the statement
to be sunk. Larger numbers result in more aggressive statement sinking.
A small positive adjustment is applied for
statements with memory operands as those are even more profitable to sink.
@paindex max-stores-to-sink
@item max-stores-to-sink
The maximum number of conditional store pairs that can be sunk. Set to 0
if either vectorization (@option{-ftree-vectorize}) or if-conversion
(@option{-ftree-loop-if-convert}) is disabled.
@paindex case-values-threshold
@item case-values-threshold
The smallest number of different values for which it is best to use a
jump table instead of a tree of conditional branches. If the value is
0, use the default for the machine.
@paindex jump-table-max-growth-ratio-for-size
@item jump-table-max-growth-ratio-for-size
The maximum code size growth ratio when expanding
into a jump table (as a percentage). The parameter is used when
optimizing for size.
@paindex jump-table-max-growth-ratio-for-speed
@item jump-table-max-growth-ratio-for-speed
The maximum code size growth ratio when expanding
into a jump table (as a percentage). The parameter is used when
optimizing for speed.
@paindex tree-reassoc-width
@item tree-reassoc-width
In the tree reassociation pass, set the maximum number of instructions
executed in parallel in the reassociated tree.
This parameter overrides target-dependent
heuristics used by default if it has a nonzero value.
@paindex sched-pressure-algorithm
@item sched-pressure-algorithm
Choose between the two available implementations of
@option{-fsched-pressure}. Algorithm 1 is the original implementation
and is the more likely to prevent instructions from being reordered.
Algorithm 2 was designed to be a compromise between the relatively
conservative approach taken by algorithm 1 and the rather aggressive
approach taken by the default scheduler. It relies more heavily on
having a regular register file and accurate register pressure classes.
See @file{haifa-sched.cc} in the GCC sources for more details.
The default choice depends on the target.
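For example (with a placeholder source file), algorithm 2 can be selected
explicitly when pressure-sensitive scheduling is enabled:
@example
gcc -O2 -fsched-pressure --param sched-pressure-algorithm=2 foo.c
@end example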
@paindex max-slsr-cand-scan
@item max-slsr-cand-scan
Set the maximum number of existing candidates that are considered when
seeking a basis for a new straight-line strength reduction candidate.
@paindex asan-globals
@item asan-globals
Enable buffer overflow detection for global objects. This kind
of protection is enabled by default if you are using the
@option{-fsanitize=address} option.
To disable protection of global objects, use @option{--param asan-globals=0}.
@paindex asan-stack
@item asan-stack
Enable buffer overflow detection for stack objects. This kind of
protection is enabled by default when using @option{-fsanitize=address}.
To disable stack protection, use @option{--param asan-stack=0}.
@paindex asan-instrument-reads
@item asan-instrument-reads
Enable buffer overflow detection for memory reads. This kind of
protection is enabled by default when using @option{-fsanitize=address}.
To disable protection of memory reads, use
@option{--param asan-instrument-reads=0}.
@paindex asan-instrument-writes
@item asan-instrument-writes
Enable buffer overflow detection for memory writes. This kind of
protection is enabled by default when using @option{-fsanitize=address}.
To disable protection of memory writes, use
@option{--param asan-instrument-writes=0}.
@paindex asan-memintrin
@item asan-memintrin
Enable buffer overflow detection for built-in memory functions such as
@code{memcpy}. This kind of protection
is enabled by default when using @option{-fsanitize=address}.
To disable protection of built-in functions, use
@option{--param asan-memintrin=0}.
@paindex asan-use-after-return
@item asan-use-after-return
Enable detection of use-after-return. This kind of protection
is enabled by default when using the @option{-fsanitize=address} option.
To disable it use @option{--param asan-use-after-return=0}.
Note: By default the check is disabled at run time. To enable it,
add @code{detect_stack_use_after_return=1} to the environment variable
@env{ASAN_OPTIONS}.
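For example (with a placeholder source file), the check can be instrumented
at compile time and then enabled at run time:
@example
gcc -g -fsanitize=address --param asan-use-after-return=1 foo.c
ASAN_OPTIONS=detect_stack_use_after_return=1 ./a.out
@end example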
@paindex asan-instrumentation-with-call-threshold
@item asan-instrumentation-with-call-threshold
If the number of memory accesses in the function being instrumented
is greater than or equal to this number, use callbacks instead of inline
checks. For example, to disable inline code use
@option{--param asan-instrumentation-with-call-threshold=0}.
@paindex asan-kernel-mem-intrinsic-prefix
@item asan-kernel-mem-intrinsic-prefix
If nonzero, prefix calls to @code{memcpy}, @code{memset} and @code{memmove}
with @samp{__asan_} or @samp{__hwasan_}
for @option{-fsanitize=kernel-address} or @option{-fsanitize=kernel-hwaddress},
respectively.
@paindex hwasan-instrument-stack
@item hwasan-instrument-stack
Enable hwasan instrumentation of statically-sized stack-allocated variables.
This kind of instrumentation is enabled by default when using
@option{-fsanitize=hwaddress} and disabled by default when using
@option{-fsanitize=kernel-hwaddress}.
To disable stack instrumentation use
@option{--param hwasan-instrument-stack=0}, and to enable it use
@option{--param hwasan-instrument-stack=1}.
@paindex hwasan-random-frame-tag
@item hwasan-random-frame-tag
When using stack instrumentation, decide tags for stack variables using a
deterministic sequence beginning at a random tag for each frame. With this
parameter unset, tags are chosen using the same sequence but beginning from 1.
This is enabled by default for @option{-fsanitize=hwaddress} and unavailable
for @option{-fsanitize=kernel-hwaddress} and @option{-fsanitize=memtag-stack}.
To disable it use @option{--param hwasan-random-frame-tag=0}.
@paindex hwasan-instrument-allocas
@item hwasan-instrument-allocas
Enable hwasan instrumentation of dynamically sized stack-allocated variables.
This kind of instrumentation is enabled by default when using
@option{-fsanitize=hwaddress} and disabled by default when using
@option{-fsanitize=kernel-hwaddress}.
To disable instrumentation of such variables use
@option{--param hwasan-instrument-allocas=0}, and to enable it use
@option{--param hwasan-instrument-allocas=1}.
@paindex hwasan-instrument-reads
@item hwasan-instrument-reads
Enable hwasan checks on memory reads. Instrumentation of reads is enabled by
default for both @option{-fsanitize=hwaddress} and
@option{-fsanitize=kernel-hwaddress}.
To disable checking memory reads use
@option{--param hwasan-instrument-reads=0}.
@paindex hwasan-instrument-writes
@item hwasan-instrument-writes
Enable hwasan checks on memory writes. Instrumentation of writes is enabled by
default for both @option{-fsanitize=hwaddress} and
@option{-fsanitize=kernel-hwaddress}.
To disable checking memory writes use
@option{--param hwasan-instrument-writes=0}.
@paindex hwasan-instrument-mem-intrinsics
@item hwasan-instrument-mem-intrinsics
Enable hwasan instrumentation of builtin functions. Instrumentation of these
builtin functions is enabled by default for both @option{-fsanitize=hwaddress}
and @option{-fsanitize=kernel-hwaddress}.
To disable instrumentation of builtin functions use
@option{--param hwasan-instrument-mem-intrinsics=0}.
@paindex memtag-instrument-allocas
@item memtag-instrument-allocas
Enable hardware-assisted memory tagging of dynamically sized stack-allocated
variables. This kind of code generation is enabled by default when using
@option{-fsanitize=memtag-stack}.
@paindex memtag-instrument-mem-intrinsics
@item memtag-instrument-mem-intrinsics
When sanitizing using MTE instructions, also instrument calls to built-in
memory functions.
@paindex use-after-scope-direct-emission-threshold
@item use-after-scope-direct-emission-threshold
If the size of a local variable in bytes is smaller or equal to this
number, directly poison (or unpoison) shadow memory instead of using
run-time callbacks.
@paindex tsan-distinguish-volatile
@item tsan-distinguish-volatile
Emit special instrumentation for accesses to volatiles.
@paindex tsan-instrument-func-entry-exit
@item tsan-instrument-func-entry-exit
Emit instrumentation calls to @code{__tsan_func_entry()} and
@code{__tsan_func_exit()}.
@paindex max-fsm-thread-path-insns
@item max-fsm-thread-path-insns
Maximum number of instructions to copy when duplicating blocks on a
finite state automaton jump thread path.
@paindex threader-debug
@item threader-debug
Enables verbose dumping of the threader solver. This parameter has two
special values, @samp{none} and @samp{all}.
@paindex parloops-chunk-size
@item parloops-chunk-size
Chunk size of OpenMP schedule for loops parallelized by parloops.
@paindex parloops-schedule
@item parloops-schedule
The OpenMP schedule type for loops parallelized by parloops
(@samp{static}, @samp{dynamic}, @samp{guided}, @samp{auto}, @samp{runtime}).
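As an illustrative combination (the source file name and values are
placeholders), these parameters take effect when loops are auto-parallelized
with @option{-ftree-parallelize-loops}:
@example
gcc -O2 -ftree-parallelize-loops=4 \
    --param parloops-schedule=dynamic \
    --param parloops-chunk-size=100 foo.c
@end example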
@paindex parloops-min-per-thread
@item parloops-min-per-thread
The minimum number of iterations per thread of an innermost parallelized
loop for which the parallelized variant is preferred over the single threaded
one. Note that for a parallelized loop nest the
minimum number of iterations of the outermost loop per thread is two.
@paindex max-ssa-name-query-depth
@item max-ssa-name-query-depth
Maximum depth of recursion when querying properties of SSA names in things
like fold routines. One level of recursion corresponds to following a
use-def chain.
@paindex max-speculative-devirt-maydefs
@item max-speculative-devirt-maydefs
The maximum number of may-defs we analyze when looking for a must-def
specifying the dynamic type of an object that invokes a virtual call
we may be able to devirtualize speculatively.
@paindex ranger-debug
@item ranger-debug
Specifies the type of debug output to be issued for ranges.
@paindex unroll-jam-min-percent
@item unroll-jam-min-percent
The minimum percentage of memory references that must be optimized
away for the unroll-and-jam transformation to be considered profitable.
@paindex unroll-jam-max-unroll
@item unroll-jam-max-unroll
The maximum number of times the outer loop should be unrolled by
the unroll-and-jam transformation.
@paindex max-rtl-if-conversion-unpredictable-cost
@item max-rtl-if-conversion-unpredictable-cost
Maximum permissible cost for the sequence that would be generated
by the RTL if-conversion pass for a branch that is considered unpredictable.
@paindex max-variable-expansions-in-unroller
@item max-variable-expansions-in-unroller
If @option{-fvariable-expansion-in-unroller} is used, the maximum number
of times that an individual variable is expanded during loop unrolling.
@paindex partial-inlining-entry-probability
@item partial-inlining-entry-probability
Maximum probability of the entry basic block of the split region
(as a percentage relative to the entry basic block of the function)
for partial inlining to happen.
@paindex max-tracked-strlens
@item max-tracked-strlens
Maximum number of strings for which the strlen optimization pass
tracks string lengths.
@paindex gcse-after-reload-partial-fraction
@item gcse-after-reload-partial-fraction
The threshold ratio for performing partial redundancy
elimination after reload.
@paindex gcse-after-reload-critical-fraction
@item gcse-after-reload-critical-fraction
The threshold ratio of the execution count of critical edges that
permits performing redundancy elimination after reload.
@paindex max-loop-header-insns
@item max-loop-header-insns
The maximum number of insns allowed in a loop header duplicated
by the copy loop headers pass.
@paindex vect-epilogues-nomask
@item vect-epilogues-nomask
If nonzero, enable loop epilogue vectorization using a smaller vector size.
@paindex vect-partial-vector-usage
@item vect-partial-vector-usage
Controls when the loop vectorizer considers using partial vector loads
and stores as an alternative to falling back to scalar code. 0 stops
the vectorizer from ever using partial vector loads and stores. 1 allows
partial vector loads and stores if vectorization removes the need for the
code to iterate. 2 allows partial vector loads and stores in all loops.
The parameter only has an effect on targets that support partial
vector loads and stores.
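For example (with a placeholder source file), on a target that supports
partial vector loads and stores, their use can be restricted to loops where
they remove the need to iterate:
@example
gcc -O3 --param vect-partial-vector-usage=1 foo.c
@end example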
@paindex vect-inner-loop-cost-factor
@item vect-inner-loop-cost-factor
The maximum factor that the loop vectorizer applies to the cost of statements
in an inner loop relative to the loop being vectorized. The factor applied
is the maximum of the estimated number of iterations of the inner loop and
this parameter. The default value of this parameter is 50.
@paindex vect-induction-float
@item vect-induction-float
Enable loop vectorization of floating-point inductions.
@paindex vect-scalar-cost-multiplier
@item vect-scalar-cost-multiplier
Apply the given multiplier percentage to scalar loop costing during
vectorization.
Increasing the cost multiplier makes vector loops more profitable.
@paindex vrp-block-limit
@item vrp-block-limit
Maximum number of basic blocks before value range propagation
switches to a simpler algorithm that uses less memory.
@paindex vrp-cstload-limit
@item vrp-cstload-limit
Maximum number of steps when inferring a value range from a load from
a constant aggregate.
@paindex vrp-sparse-threshold
@item vrp-sparse-threshold
Maximum number of basic blocks before value range propagation
uses a sparse bitmap cache.
@paindex vrp-switch-limit
@item vrp-switch-limit
Maximum number of outgoing edges in a switch to allow it to be processed
by value range propagation.
@paindex vrp-vector-threshold
@item vrp-vector-threshold
Maximum number of basic blocks for value range propagation to
use a basic cache vector.
@paindex avoid-fma-max-bits
@item avoid-fma-max-bits
Maximum number of bits for which we avoid creating FMAs.
@paindex fully-pipelined-fma
@item fully-pipelined-fma
Whether the target fully pipelines FMA instructions. If non-zero,
reassociation considers the benefit of parallelizing FMA's multiplication
part and addition part, assuming FMUL and FMA use the same units that can
also do FADD.
@paindex sms-loop-average-count-threshold
@item sms-loop-average-count-threshold
A threshold on the average loop count considered by the swing modulo scheduler.
@paindex sms-dfa-history
@item sms-dfa-history
The number of cycles the swing modulo scheduler considers when checking
conflicts using DFA.
@paindex graphite-allow-codegen-errors
@item graphite-allow-codegen-errors
Whether code generation errors should be internal compiler errors (ICEs)
when @option{-fchecking} is enabled.
@paindex sms-max-ii-factor
@item sms-max-ii-factor
A factor for tuning the upper bound that the swing modulo scheduler
uses for scheduling a loop.
@paindex lra-max-considered-reload-pseudos
@item lra-max-considered-reload-pseudos
The maximum number of reload pseudos that are considered during
spilling a non-reload pseudo.
@paindex lra-max-pseudos-points-log2-considered-for-preferences
@item lra-max-pseudos-points-log2-considered-for-preferences
The maximum @code{log2(number of reload pseudos * number of
program points)} threshold when preferences for other reload pseudos
are still considered. Taking these preferences into account helps to
improve register allocation. However, for very large functions, a
large value can result in significant compilation time and memory
consumption. The default value is 30.
@paindex max-pow-sqrt-depth
@item max-pow-sqrt-depth
Maximum depth of square root chains to use when synthesizing exponentiation
by a real constant.
@paindex max-dse-active-local-stores
@item max-dse-active-local-stores
Maximum number of active local stores in RTL dead store elimination.
@paindex asan-instrument-allocas
@item asan-instrument-allocas
Enable asan @code{alloca}/VLA protection.
@paindex max-iterations-computation-cost
@item max-iterations-computation-cost
Bound on the cost of an expression to compute the number of iterations
in the doloop optimizer.
@paindex max-isl-operations
@item max-isl-operations
Maximum number of isl operations; 0 means unlimited.
@paindex graphite-max-arrays-per-scop
@item graphite-max-arrays-per-scop
Maximum number of arrays per SCoP.
@paindex max-vartrack-reverse-op-size
@item max-vartrack-reverse-op-size
Maximum size of variable tracking loc list for which reverse ops should
be added.
@paindex fsm-scale-path-stmts
@item fsm-scale-path-stmts
Scale factor to apply to the number of statements in a threading path
crossing a loop back edge when comparing to
@option{--param=max-jump-thread-duplication-stmts}.
@paindex uninit-control-dep-attempts
@item uninit-control-dep-attempts
Maximum number of nested calls to search for control dependencies
during uninitialized variable analysis.
@paindex uninit-max-chain-len
@item uninit-max-chain-len
Maximum number of predicates and-ed for each predicate or-ed in the normalized
predicate chain.
@paindex uninit-max-num-chains
@item uninit-max-num-chains
Maximum number of predicates or-ed in the normalized predicate chain.
@paindex uninit-max-prune-work
@item uninit-max-prune-work
Maximum amount of work done to prune paths where the variable is always
initialized.
@paindex sched-autopref-queue-depth
@item sched-autopref-queue-depth
Hardware autoprefetcher scheduler model control flag.
Number of lookahead cycles the model looks into; a value of 0
only enables the instruction sorting heuristic.
@paindex loop-versioning-max-inner-insns
@item loop-versioning-max-inner-insns
The maximum number of instructions that an inner loop can have
before the loop versioning pass considers it too big to copy.
@paindex loop-versioning-max-outer-insns
@item loop-versioning-max-outer-insns
The maximum number of instructions that an outer loop can have
before the loop versioning pass considers it too big to copy,
discounting any instructions in inner loops that directly benefit
from versioning.
@paindex ssa-name-def-chain-limit
@item ssa-name-def-chain-limit
The maximum number of SSA_NAME assignments to follow in determining
a property of a variable such as its value. This limits the number
of iterations or recursive calls GCC performs when optimizing certain
statements or when determining their validity prior to issuing
diagnostics.
@paindex store-merging-max-size
@item store-merging-max-size
Maximum size of a single store merging region in bytes.
@paindex store-forwarding-max-distance
@item store-forwarding-max-distance
Maximum distance, in number of instructions, over which a small store
forwarded to a larger load may stall. A value of 0 disables the cost checks
for the avoid-store-forwarding pass.
@paindex hash-table-verification-limit
@item hash-table-verification-limit
The number of elements for which hash table verification is done
for each searched element.
@paindex max-find-base-term-values
@item max-find-base-term-values
Maximum number of VALUEs handled during a single @code{find_base_term} call.
@paindex analyzer-max-enodes-per-program-point
@item analyzer-max-enodes-per-program-point
The maximum number of exploded nodes per program point within
the analyzer, before terminating analysis of that point.
@paindex analyzer-max-constraints
@item analyzer-max-constraints
The maximum number of constraints per state.
@paindex analyzer-min-snodes-for-call-summary
@item analyzer-min-snodes-for-call-summary
The minimum number of supernodes within a function for the
analyzer to consider summarizing its effects at call sites.
@paindex analyzer-max-enodes-for-full-dump
@item analyzer-max-enodes-for-full-dump
The maximum depth of exploded nodes that should appear in a dot dump
before switching to a less verbose format.
@paindex analyzer-max-recursion-depth
@item analyzer-max-recursion-depth
The maximum number of times a callsite can appear in a call stack
within the analyzer, before terminating analysis of a call that would
recurse deeper.
@paindex analyzer-max-svalue-depth
@item analyzer-max-svalue-depth
The maximum depth of a symbolic value, before approximating
the value as unknown.
@paindex analyzer-max-infeasible-edges
@item analyzer-max-infeasible-edges
The maximum number of infeasible edges to reject before declaring
a diagnostic as infeasible.
@paindex gimple-fe-computed-hot-bb-threshold
@item gimple-fe-computed-hot-bb-threshold
The number of executions of a basic block that is considered hot.
The parameter is used only by the GIMPLE front end.
@paindex analyzer-bb-explosion-factor
@item analyzer-bb-explosion-factor
The maximum number of ``after supernode'' exploded nodes within the analyzer
per supernode, before terminating analysis.
@paindex analyzer-text-art-string-ellipsis-threshold
@item analyzer-text-art-string-ellipsis-threshold
The number of bytes at which to ellipsize string literals in analyzer text
art diagrams.
@paindex analyzer-text-art-ideal-canvas-width
@item analyzer-text-art-ideal-canvas-width
The ideal width in characters of text art diagrams generated by the analyzer.
@paindex analyzer-text-art-string-ellipsis-head-len
@item analyzer-text-art-string-ellipsis-head-len
The number of literal bytes to show at the head of a string literal in text
art when ellipsizing it.
@paindex analyzer-text-art-string-ellipsis-tail-len
@item analyzer-text-art-string-ellipsis-tail-len
The number of literal bytes to show at the tail of a string literal in text
art when ellipsizing it.
@paindex ranger-logical-depth
@item ranger-logical-depth
Maximum depth of logical expression evaluation ranger looks through
when evaluating outgoing edge ranges.
@paindex ranger-recompute-depth
@item ranger-recompute-depth
Maximum depth of instruction chains to consider for recomputation
in the outgoing range calculator.
@paindex relation-block-limit
@item relation-block-limit
Maximum number of relations the dominator tree oracle registers in a
basic block during value range relational processing.
@paindex transitive-relations-work-bound
@item transitive-relations-work-bound
Work bound when discovering transitive relations from existing relations
in value range relational processing.
@paindex min-pagesize
@item min-pagesize
Minimum page size for warning and early break vectorization purposes.
@paindex openacc-kernels
@item openacc-kernels
Specify mode of OpenACC @code{kernels} constructs handling.
With @option{--param=openacc-kernels=decompose}, OpenACC @code{kernels}
constructs are decomposed into parts, a sequence of compute
constructs, each then handled individually.
This is work in progress.
With @option{--param=openacc-kernels=parloops}, OpenACC @code{kernels}
constructs are handled by the @samp{parloops} pass, en bloc.
This is the current default.
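For instance (with a placeholder source file), the decomposing mode can be
selected explicitly when compiling OpenACC code:
@example
gcc -fopenacc --param=openacc-kernels=decompose foo.c
@end example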
@paindex openacc-privatization
@item openacc-privatization
Control whether the @option{-fopt-info-omp-note} and applicable
@option{-fdump-tree-*-details} options emit OpenACC privatization diagnostics.
With @option{--param=openacc-privatization=quiet}, don't diagnose.
This is the current default.
With @option{--param=openacc-privatization=noisy}, do diagnose.
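For example (with a placeholder source file), to see the privatization
diagnostics together with the optimization notes:
@example
gcc -fopenacc -fopt-info-omp-note --param=openacc-privatization=noisy foo.c
@end example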
@paindex cycle-accurate-model
@item cycle-accurate-model
Specifies whether GCC should assume that the scheduling description is mostly
a cycle-accurate model of the target processor the code is intended to
run on, in the absence of cache misses. Nonzero means that the selected
scheduling model is accurate and likely describes an in-order processor,
and that scheduling should aggressively spill to try to fill any pipeline
bubbles. This is the current default. Zero means the scheduling description
might not be available/accurate or perhaps not applicable at all, such as for
modern out-of-order processors.
@end table
@node Target-Specific Parameters
@section Target-Specific Parameters
@cindex target-specific parameters
Several back ends have their own parameters.
@menu
* AArch64 Parameters::
* AMD GCN Parameters::
* LoongArch Parameters::
* RISC-V Parameters::
* RS/6000 and PowerPC Parameters::
* x86 Parameters::
@end menu
@node AArch64 Parameters
@subsection AArch64 Parameters
@cindex AArch64 parameters
The following choices of @var{name} are available on AArch64 targets:
@table @gcctabopt
@paindex aarch64-vect-compare-costs
@item aarch64-vect-compare-costs
When vectorizing, consider using multiple different approaches and use
the cost model to choose the cheapest one. This includes:
@itemize
@item
Trying both SVE and Advanced SIMD, when SVE is available.
@item
Trying to use 64-bit Advanced SIMD vectors for the smallest data elements,
rather than using 128-bit vectors for everything.
@item
Trying to use ``unpacked'' SVE vectors for smaller elements. This includes
storing smaller elements in larger containers and accessing elements with
extending loads and truncating stores.
@end itemize
@paindex aarch64-float-recp-precision
@item aarch64-float-recp-precision
The number of Newton iterations for calculating the reciprocal for float type.
The precision of division is proportional to this parameter when division
approximation is enabled. The default value is 1.
@paindex aarch64-double-recp-precision
@item aarch64-double-recp-precision
The number of Newton iterations for calculating the reciprocal for double type.
The precision of division is proportional to this parameter when division
approximation is enabled. The default value is 2.
@paindex aarch64-autovec-preference
@item aarch64-autovec-preference
An old alias for @option{-mautovec-preference}. If both
@option{-mautovec-preference} and @option{--param=aarch64-autovec-preference}
are passed, the @option{--param} value is used.
@paindex aarch64-ldp-policy
@item aarch64-ldp-policy
Fine-grained policy for load pair (@code{ldp}) instructions.
With @option{--param=aarch64-ldp-policy=default}, use the policy of the
tuning structure. This is the current default.
With @option{--param=aarch64-ldp-policy=always}, emit @code{ldp} regardless
of alignment.
With @option{--param=aarch64-ldp-policy=never}, do not emit @code{ldp}.
With @option{--param=aarch64-ldp-policy=aligned}, emit @code{ldp} only if the
source pointer is aligned to at least double the alignment of the type.
@paindex aarch64-stp-policy
@item aarch64-stp-policy
Fine-grained policy for store pair (@code{stp}) instructions.
With @option{--param=aarch64-stp-policy=default}, use the policy of the
tuning structure. This is the current default.
With @option{--param=aarch64-stp-policy=always}, emit @code{stp} regardless
of alignment.
With @option{--param=aarch64-stp-policy=never}, do not emit @code{stp}.
With @option{--param=aarch64-stp-policy=aligned}, emit @code{stp} only if the
source pointer is aligned to at least double the alignment of the type.
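As an illustration (the source file name is a placeholder), either policy can
be overridden on the command line, for example to emit store pairs only for
sufficiently aligned accesses:
@example
gcc -O2 --param=aarch64-stp-policy=aligned foo.c
@end example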
@paindex aarch64-ldp-alias-check-limit
@item aarch64-ldp-alias-check-limit
Limit on the number of alias checks performed by the AArch64 load/store pair
fusion pass when attempting to form an @code{ldp}/@code{stp}.
Higher values make the pass
more aggressive at re-ordering loads over stores, at the expense of increased
compile time.
@paindex aarch64-ldp-writeback
@item aarch64-ldp-writeback
Parameter to control which writeback opportunities the AArch64
load/store pair fusion pass attempts to handle.
A value of zero disables writeback handling. One
means we try to form pairs involving one or more existing individual writeback
accesses where possible. A value of two means we also try to opportunistically
form writeback opportunities by folding in trailing destructive updates of the
base register used by a pair.
@paindex aarch64-loop-vect-issue-rate-niters
@item aarch64-loop-vect-issue-rate-niters
The tuning for some AArch64 CPUs tries to take both latencies and issue
rates into account when deciding whether a loop should be vectorized
using SVE, vectorized using Advanced SIMD, or not vectorized at all.
If this parameter is set to @var{n}, GCC does not use this heuristic
for loops that are known to execute in fewer than @var{n} Advanced
SIMD iterations.
@paindex aarch64-vect-unroll-limit
@item aarch64-vect-unroll-limit
The vectorizer uses available tuning information to determine whether it
would be beneficial to unroll the main vectorized loop and by how much. This
parameter sets the upper bound of how much the vectorizer unrolls the main
loop. The default value is four.
@paindex aarch64-tag-memory-loop-threshold
@item aarch64-tag-memory-loop-threshold
Parameter to control the threshold, in number of granules, beyond which an
explicit loop for tagging a memory block is emitted. The memory block
is tagged using MTE instructions.
@end table
@node AMD GCN Parameters
@subsection AMD GCN Parameters
@cindex AMD GCN parameters
The following choices of @var{name} are available on GCN targets:
@table @gcctabopt
@paindex gcn-preferred-vectorization-factor
@item gcn-preferred-vectorization-factor
Preferred vectorization factor: @samp{default}, @samp{32}, @samp{64}.
@end table
@node LoongArch Parameters
@subsection LoongArch Parameters
@cindex LoongArch parameters
The following parameters are available on LoongArch targets:
@table @gcctabopt
@paindex loongarch-vect-unroll-limit
@item loongarch-vect-unroll-limit
The vectorizer uses available tuning information to determine whether it
would be beneficial to unroll the main vectorized loop and by how much. This
parameter sets the upper bound of how much the vectorizer unrolls the main
loop. The default value is six.
@end table
@node RISC-V Parameters
@subsection RISC-V Parameters
@cindex RISC-V parameters
The following parameters are available on RISC-V targets:
@table @gcctabopt
@paindex riscv-strcmp-inline-limit
@item riscv-strcmp-inline-limit
The maximum number of bytes compared by the inlined code for @code{strcmp}
and @code{strncmp} when enabled by the @option{-minline-strcmp} and
@option{-minline-strncmp} options, respectively.
The default value is 64.
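For example (with a placeholder source file), the limit can be raised when
inline string comparison is enabled:
@example
gcc -O2 -minline-strcmp --param riscv-strcmp-inline-limit=128 foo.c
@end example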
@end table
@node RS/6000 and PowerPC Parameters
@subsection RS/6000 and PowerPC Parameters
@cindex RS/6000 and PowerPC parameters
The following parameters are available on RS/6000 and PowerPC targets:
@table @gcctabopt
@paindex rs6000-vect-unroll-limit
@item rs6000-vect-unroll-limit
The vectorizer checks with target information to determine whether it
would be beneficial to unroll the main vectorized loop and by how much. This
parameter sets the upper bound of how much the vectorizer unrolls the main
loop. The default value is four.
@end table
@node x86 Parameters
@subsection x86 Parameters
@cindex x86 parameters
The following choices of @var{name} are available on i386 and x86_64 targets:
@table @gcctabopt
@paindex x86-stlf-window-ninsns
@item x86-stlf-window-ninsns
The number of instructions above which the STLF (store-to-load forwarding)
stall penalty can be compensated.
@paindex x86-stv-max-visits
@item x86-stv-max-visits
The maximum number of use and def visits when discovering an STV
(scalar-to-vector) chain before the discovery is aborted.
@paindex ix86-vect-unroll-limit
@item ix86-vect-unroll-limit
Limit how much the autovectorizer may unroll a loop.
@paindex ix86-vect-compare-costs
@item ix86-vect-compare-costs
Whether the x86 vectorizer's cost model compares the costs of different
vector sizes.
@end table