little-o notation

(definition)

Definition: A theoretical measure of the execution cost of an algorithm, usually the time or memory needed, given the problem size n, which is usually the number of items. Informally, f(n) = o(g(n)) means that f(n) becomes insignificant relative to g(n) as n approaches infinity. The notation is read, "f of n is little oh of g of n".

Formal Definition: f(n) = o(g(n)) means for all c > 0 there exists some k > 0 such that 0 ≤ f(n) < cg(n) for all n ≥ k. The value of k must not depend on n, but may depend on c.
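
One way to get a feel for the formal definition is to watch the ratio f(n)/g(n): when f(n) = o(g(n)) and g(n) is eventually positive, the ratio shrinks toward 0 as n grows. The short Python sketch below is an illustration added to this entry, using the example functions f(n) = 3n + 4 and g(n) = n² from the note further down.

    # Illustration: if f(n) = o(g(n)) and g(n) > 0, then f(n)/g(n) tends to 0
    # as n grows. f and g below are just the example functions from this entry.
    def f(n):
        return 3 * n + 4

    def g(n):
        return n * n

    for n in (10, 100, 1_000, 10_000, 100_000):
        print(n, f(n) / g(n))
    # The printed ratios decrease toward 0, consistent with 3n + 4 being o(n^2).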

Generalization (I am a kind of ...)
big-O notation.

See also ω(n).

Note: As an example, 3n + 4 is o(n²) since for any c we can choose k > (3 + √(9 + 16c))/(2c). 3n + 4 is not o(n). o(f(n)) is an upper bound, but is not an asymptotically tight bound.
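
The choice of k in the example comes from solving cn² > 3n + 4 with the quadratic formula: the positive root of cn² - 3n - 4 = 0 is (3 + √(9 + 16c))/(2c), so any n beyond it satisfies the inequality. The Python sketch below is an added illustration that spot-checks this bound for a few values of c.

    # Added illustration: for each c, every n at or above the computed k
    # satisfies 3n + 4 < c * n^2, matching the bound k > (3 + sqrt(9 + 16c)) / (2c).
    import math

    def k_for(c):
        return (3 + math.sqrt(9 + 16 * c)) / (2 * c)

    for c in (0.01, 0.5, 1.0, 10.0):
        k = math.floor(k_for(c)) + 1   # smallest integer strictly above the bound
        assert all(3 * n + 4 < c * n * n for n in range(k, k + 1000))
        print(f"c = {c}: 3n + 4 < c*n^2 for n = {k} through {k + 999}")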

Strictly, the character is the lower-case Greek letter omicron.

Author: PEB

More information

Little o is a Landau Symbol.



Entry modified 17 December 2004.
