This can be done efficiently in linear time with an in-place algorithm.
There are in-place algorithms for external sort, which require no more disk space than the original data.
Heapsort is an in-place algorithm, but it is not a stable sort.
This small change lowers the average complexity to linear time, O(n), and makes it an in-place algorithm.
This is feasible for long sequences because there are efficient, in-place algorithms for converting the base of arbitrarily precise numbers.
As described, counting sort is not an in-place algorithm; even disregarding the count array, it needs separate input and output arrays.
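A hedged sketch of this point (the key range, function, and variable names are my own): counting sort tallies keys into a count array, but it still writes the sorted result into a separate output array rather than rearranging the input in place.

```python
def counting_sort(arr, max_key):
    """Counting sort for non-negative integer keys in [0, max_key].
    Stable, but not in-place: it needs a separate output array in
    addition to the count array."""
    counts = [0] * (max_key + 1)
    for x in arr:                      # tally each key
        counts[x] += 1
    for i in range(1, max_key + 1):    # prefix sums give final positions
        counts[i] += counts[i - 1]
    output = [0] * len(arr)            # separate output array: not in-place
    for x in reversed(arr):            # reversed pass keeps the sort stable
        counts[x] -= 1
        output[counts[x]] = x
    return output
```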
It is also an in-place algorithm, requiring only constant memory overhead, since the tail recursion can be eliminated with a loop.
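As an illustrative sketch (assuming the algorithm in question is binary search; the function and variable names are my own), the tail-recursive calls can be replaced by a loop that only ever updates two index variables:

```python
def binary_search(a, key):
    """Iterative binary search over a sorted list: each tail-recursive
    call on a half of the list becomes a loop-variable update, so only
    the constant-size variables low, high, and mid are needed."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if a[mid] < key:
            low = mid + 1      # recurse on the right half -> loop update
        elif a[mid] > key:
            high = mid - 1     # recurse on the left half -> loop update
        else:
            return mid
    return -1                  # key not present
```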
Of special interest is the problem of devising an in-place algorithm that overwrites its input with its output data using only O(1) auxiliary storage.
This in turn yields in-place algorithms for problems such as determining if a graph is bipartite or testing whether two graphs have the same number of connected components.
In computational complexity theory, in-place algorithms include all algorithms with O(1) space complexity, the class DSPACE(1).
Functional programming languages often discourage or don't support explicit in-place algorithms that overwrite data, since this is a type of side effect; instead, they only allow new data to be constructed.
In computer science, an in-place algorithm (also called an in situ algorithm, from the Latin) is an algorithm that transforms its input using a data structure with only a small, constant amount of extra storage space.
However, most implementations require O(log n) space to keep track of the recursive function calls as part of the divide-and-conquer strategy, so quicksort is not an in-place algorithm.
Note that it is possible in principle to carefully construct in-place algorithms that don't modify data (unless the data is no longer being used), but this is rarely done in practice.
Mainly because of the importance of fast Fourier transform (FFT) algorithms, numerous efficient O(n) in-place algorithms for bit-reversal and digit-reversal permutations have been devised.
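Those efficient O(n) methods are involved; as a simple, non-optimized hedged sketch of what a bit-reversal permutation does (the function name is my own), here is a naive in-place version for power-of-two lengths:

```python
def bit_reverse_permute(a):
    """In-place bit-reversal permutation for a list whose length is a
    power of two: each element is swapped with the element at its
    bit-reversed index, and each pair is swapped exactly once.
    This naive version is O(n log n), not one of the O(n) algorithms."""
    n = len(a)
    bits = n.bit_length() - 1
    for i in range(n):
        j = int(format(i, f'0{bits}b')[::-1], 2)  # reverse the bits of i
        if i < j:                                 # swap each pair once
            a[i], a[j] = a[j], a[i]
    return a
```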
(The results are in the correct order in X and no further bit-reversal permutation is required; the often-mentioned necessity of a separate bit-reversal stage only arises for certain in-place algorithms, as described below.)
On the other hand, sometimes it may be more practical to count the output space in determining whether an algorithm is in-place, such as in the first reverse example below; this makes it difficult to strictly define in-place algorithms.
Since we no longer need the original array, we can instead overwrite it with its own reversal using an in-place algorithm that needs only constant additional storage for a few auxiliary variables, no matter how large the array is.
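A minimal sketch of such an in-place reversal (the function and variable names are illustrative): each iteration swaps one symmetric pair, so only the loop index and one temporary variable are ever needed.

```python
def reverse_in_place(a):
    """Reverse the list a in place. Only the auxiliary variables
    i and tmp are used, regardless of how large the list is."""
    n = len(a)
    for i in range(n // 2):
        tmp = a[i]              # swap the symmetric pair (i, n-1-i)
        a[i] = a[n - 1 - i]
        a[n - 1 - i] = tmp
    return a
```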
An algorithm is said to be an in situ algorithm, or in-place algorithm, if the extra amount of memory required to execute the algorithm is O(1), that is, does not exceed a constant no matter how large the input.
Similarly, there are simple randomized in-place algorithms for primality testing such as the Miller-Rabin primality test, and there are also simple in-place randomized factoring algorithms such as Pollard's rho algorithm.
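A hedged sketch of Pollard's rho (the parameter c is fixed here for reproducibility; a real implementation would pick c at random and retry on failure, which is where the randomization comes in): the method keeps only a constant number of variables, which is what makes it in-place.

```python
from math import gcd

def pollard_rho(n, c=1):
    """Pollard's rho factoring: iterate x -> x*x + c (mod n) at two
    speeds and look for a nontrivial gcd. Only the constant-size
    variables x, y, and d are kept, so the method is in-place."""
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + c) % n            # tortoise: one step
        y = (y * y + c) % n
        y = (y * y + c) % n            # hare: two steps
        d = gcd(abs(x - y), n)
    return d if d != n else None       # None: retry with a different c
```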
A typical strategy for in-place algorithms without auxiliary storage and without separate digit-reversal passes involves small matrix transpositions (which swap individual pairs of digits) at intermediate stages, which can be combined with the radix butterflies to reduce the number of passes over the data.
There is no known simple, deterministic, in-place algorithm to determine this, but if we simply start at one vertex and perform a random walk of about 20n steps, the chance that we will stumble across the other vertex, provided it is in the same component, is very high.
In space performance, spreadsort is worse than most in-place algorithms: in its simplest form it is not an in-place algorithm, using O(n) extra space; in experiments, about 20% more than quicksort using a c of 4-8.