Model of computation

From Wikipedia, the free encyclopedia

In computer science, and more specifically in computability theory and computational complexity theory, a model of computation is a model which describes how an output of a mathematical function is computed given an input. A model describes how units of computation, memories, and communications are organized.[1] The computational complexity of an algorithm can be measured given a model of computation. Using a model allows the performance of algorithms to be studied independently of the variations that are specific to particular implementations and technologies.
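
As a concrete illustration, consider a minimal sketch of a toy register machine (the three-instruction set INC/DEC/JZ below is illustrative, not a standard one): its memory is a set of integer registers, its units of computation are the primitive instructions, and a program organizes those units to compute a function, in this case addition.

    # A toy register machine: memory is a dict of integer registers, and
    # the only primitive operations are INC, DEC, and JZ (jump if zero).
    # Illustrative sketch; the instruction set is assumed, not standard.

    def run(program, registers):
        """Execute a list of (op, *args) instructions until the program ends."""
        pc = 0  # program counter
        while pc < len(program):
            op, *args = program[pc]
            if op == "INC":    # INC r: add 1 to register r
                registers[args[0]] += 1
                pc += 1
            elif op == "DEC":  # DEC r: subtract 1 from register r
                registers[args[0]] -= 1
                pc += 1
            elif op == "JZ":   # JZ r, target: jump if register r is zero
                pc = args[1] if registers[args[0]] == 0 else pc + 1
            else:
                raise ValueError(f"unknown instruction: {op}")
        return registers

    # Addition as a program in this model: move the contents of register
    # "b" into register "a" one unit at a time.
    add = [
        ("JZ", "b", 4),   # 0: if b == 0, jump past the end and halt
        ("DEC", "b"),     # 1: b -= 1
        ("INC", "a"),     # 2: a += 1
        ("JZ", "z", 0),   # 3: unconditional jump back (register "z" stays 0)
    ]

    print(run(add, {"a": 2, "b": 3, "z": 0}))  # {'a': 5, 'b': 0, 'z': 0}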

Categories

Models of computation can be classified into three categories: sequential models, functional models, and concurrent models.

Sequential models

Sequential models include finite-state machines, pushdown automata, register machines, random-access machines, and Turing machines. Some of these models have both deterministic and nondeterministic variants; nondeterministic models are used in the study of the computational complexity of algorithms.
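
The deterministic/nondeterministic distinction can be made concrete with finite automata, as a minimal sketch (the automaton below is an illustrative example, not taken from the article): a deterministic finite automaton has exactly one successor state for each state and input symbol, while a nondeterministic one may have several and accepts if at least one sequence of choices reaches an accepting state. The Python below simulates a nondeterministic finite automaton that accepts strings over {a, b} whose second-to-last symbol is 'a', a language where nondeterminism keeps the machine small.

    # Nondeterministic finite automaton: delta maps (state, symbol) to a
    # *set* of successor states; the machine accepts if some sequence of
    # choices ends in an accepting state. Simulated deterministically by
    # tracking the set of all reachable states (subset construction,
    # performed on the fly). The automaton itself is illustrative.

    def nfa_accepts(delta, start, accepting, word):
        states = {start}
        for symbol in word:
            states = set().union(*(delta.get((s, symbol), set()) for s in states))
            if not states:   # every branch died: reject early
                return False
        return bool(states & accepting)

    # Accepts strings over {a, b} whose second-to-last symbol is 'a'.
    # State q0 loops on everything; on 'a' it may also *guess* that this
    # 'a' is second-to-last, move to q1, then to q2 on one more symbol.
    delta = {
        ("q0", "a"): {"q0", "q1"},
        ("q0", "b"): {"q0"},
        ("q1", "a"): {"q2"},
        ("q1", "b"): {"q2"},
    }

    print(nfa_accepts(delta, "q0", {"q2"}, "abab"))  # True: 'a' is 2nd-to-last
    print(nfa_accepts(delta, "q0", {"q2"}, "abbb"))  # False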

Models differ in their expressive power; for example, each function that can be computed by a finite-state machine can also be computed by a Turing machine, but not vice versa.
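
To see this gap concretely, here is a hedged sketch (both recognizers below are illustrative examples, not from the article): the language { a^n b^n : n >= 0 } can be decided with a single unbounded counter, which a Turing machine easily supplies through its tape, but no finite-state machine decides it, because a machine with finitely many states cannot distinguish a^k from a^m for every k != m. By contrast, a property like "the word contains an even number of a's" needs only two states.

    # A finite-state machine remembers only one of finitely many states,
    # so it cannot count arbitrarily high. One unbounded counter, which a
    # Turing machine provides trivially via its tape, suffices for the
    # language { a^n b^n : n >= 0 }. Illustrative sketch.

    def dfa_even_as(word):
        """A 2-state DFA: accepts words with an even number of 'a's (regular)."""
        state = 0
        for c in word:
            if c == "a":
                state = 1 - state
        return state == 0

    def anbn(word):
        """Decides { a^n b^n } with an unbounded counter, beyond any DFA."""
        count = 0
        seen_b = False
        for c in word:
            if c == "a":
                if seen_b:       # an 'a' after a 'b' can never fit a^n b^n
                    return False
                count += 1
            elif c == "b":
                seen_b = True
                count -= 1
                if count < 0:    # more b's than a's so far
                    return False
            else:
                return False
        return count == 0

    print(dfa_even_as("aabab"))  # False: three 'a's
    print(anbn("aaabbb"))        # True
    print(anbn("aabbb"))         # False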

In the field of runtime analysis of algorithms, it is common to specify a computational model in terms of the primitive operations allowed, each of which is assumed to have unit cost ("unit-cost operations"). A commonly used example is the random-access machine, which has unit cost for read and write access to all of its memory cells. In this respect, it differs from the Turing machine model mentioned above.
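
A small sketch of unit-cost accounting under such a random-access model (the counting wrapper below is an illustrative assumption, not a standard tool): every read or write of a memory cell is charged one unit regardless of its address, so a linear scan over n cells costs exactly n units.

    # Unit-cost RAM model: every read or write of a memory cell costs 1,
    # regardless of the cell's address or contents. The CountingMemory
    # wrapper below is purely illustrative.

    class CountingMemory:
        def __init__(self, cells):
            self._cells = list(cells)
            self.cost = 0        # total unit-cost operations charged

        def read(self, i):
            self.cost += 1       # one unit per read, any address
            return self._cells[i]

        def write(self, i, value):
            self.cost += 1       # one unit per write, any address
            self._cells[i] = value

    def array_sum(mem, n):
        """Sum cells 0..n-1; in this model the cost is exactly n reads."""
        total = 0
        for i in range(n):
            total += mem.read(i)
        return total

    mem = CountingMemory([3, 1, 4, 1, 5, 9, 2, 6])
    print(array_sum(mem, 8))  # 31
    print(mem.cost)           # 8, linear in n, as the unit-cost model predicts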

References

  1. ^ Savage, John E. (1998). Models of Computation: Exploring the Power of Computing. Addison-Wesley.
