Talk:Algorithm

Algorithm is a former featured article. Please see the links under Article milestones below for its original nomination page (for older articles, check the nomination archive) and why it was removed.
This article appeared on Wikipedia's Main Page as Today's featured article on July 20, 2004.

This article is of interest to the following WikiProjects:

Wikipedia Version 1.0 Editorial Team / v0.5 / Supplemental
This article has been reviewed by the Version 1.0 Editorial Team.
This article has been selected for Version 0.5 and subsequent release versions of Wikipedia.
GA: This article has been rated as GA-Class on the quality scale.
This article is one of the supplemental core articles, which every encyclopedia should have.

WikiProject Computing (Rated B-class, Top-importance)
This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
B: This article has been rated as B-Class on the project's quality scale.
Top: This article has been rated as Top-importance on the project's importance scale.

WikiProject Mathematics (Rated B-Class)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Mathematics rating: B-Class, Top-priority, Field: Basics.
One of the 500 most frequently viewed mathematics articles.
This article has comments here.

WikiProject Computer science (Rated B-class, Top-importance)
This article is within the scope of WikiProject Computer science, a collaborative effort to improve the coverage of Computer science related articles on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
B: This article has been rated as B-Class on the project's quality scale.
Top: This article has been rated as Top-importance on the project's importance scale.

This article has an assessment summary page.
To-do:
  • Cleanup the article. -- moved to section in talk page --
  • Rewrite the history section
    • Ancient algorithms (Babylonians, Euclid, Sieve)
    • Formalization -- done -- (Gödel's and Herbrand's λ-calculus (cf. footnote in Church's paper, p. 90 in Undecidable), Church's theorem (1936) (p. 88ff in Undecidable), Post's "process" (1936) (p. 289ff in Undecidable), Turing's machine (1936-1937) (p. 116ff in Undecidable), J. B. Rosser's definition of "effective method" in terms of "a machine" (1939) (pp. 225-226 in Undecidable), Kleene proposing the "Church-Turing thesis" based on the papers of Church and Turing cited here (1943) (pp. 273-274 in Undecidable))

On the introduction

I think that basics should be included in this article for beginners. Links: logarithms, Euclidean vectors

Link to AllMathWords.org Encyclopedia.

I feel it is appropriate to include a link to http://www.allmathwords.org/algorithm.html in the Wikipedia article.

The Wikipedia guidelines state:

What should be linked

3. Sites that contain neutral and accurate material that cannot be integrated into the Wikipedia article due to copyright issues, amount of detail (such as professional athlete statistics, movie or television credits, interview transcripts, or online textbooks) or other reasons.

The reason the linked article should be permitted is that it is intended for a different audience than the Wikipedia article. The Wikipedia article is written at a college level. The All Math Words Encyclopedia is written for U.S. grades 7-10. The Wikipedia article will be confusing to most 12-14 year olds. The All Math Words Encyclopedia article keeps the information and examples reachable by the target audience. —Preceding unsigned comment added by 76.27.81.30 (talk) 14:00, 5 October 2008 (UTC)

I don't have a problem with the content of this link (although it does refer, circularly, back to the wikipedia article). I would not have a problem with creating a sub-section of "References" with a title such as "For younger readers" and this link entered there. Maybe Silly Rabbit would be willing to explain its objections and its reversion. Bill Wvbailey (talk) 21:49, 5 October 2008 (UTC)

The subject is also covered in the Simple English Wikipedia. See here. pgr94 (talk) 16:30, 6 December 2008 (UTC)

I was totally unaware of it. What a nice addition to wikipedia! Do you have an opinion about putting a link to this simple english version and/or to the All Math Words Encyclopedia? Bill Wvbailey (talk) 18:31, 7 December 2008 (UTC)
I reverted the addition of the AllMathWords link because it appeared to be spammed across many articles in a short period of time, and the entries do not seem to provide a resource beyond that of a well-written article. (The link under discussion has virtually no content at all.) So I don't feel the link belongs here, but I'm not going to argue about it if you think it should be added back here. As for the simple English, note that there is already an interwiki. siℓℓy rabbit (talk) 18:57, 7 December 2008 (UTC)

Algorithm equivalence and normal forms

Is it possible to determine if two programs code for the same algorithm? For example, if an algorithm is expressed in two different languages, can they be mapped back to the same algorithm? More concretely, given implementations of quicksort in C, Java, Lisp and Prolog, is it possible to determine (automatically) that the programs represent the same algorithm? This, I suppose, is the same as asking if there is a canonical form for expressing algorithms.

It seems like a fundamental question that I didn't see covered in the article. pgr94 (talk) 12:25, 26 October 2008 (UTC)

A really interesting question. My guess is: formally, the matter is intractable in the same manner as the Busy beaver problem. But some thoughts: (I'm just an assembly-language coder (8085 and MC68HC), wrote a "universal program" for a home-built Post-Turing machine as well as a zillion little counter machine algorithms etc.) What I've observed is that the front-end "specification", if it is precise enough, usually sets up the task of "coding", which thereafter you just crank out. It is in the specification that we'd find the similarities -- the code will be unique for the instance but fall into a general classification or classifications. I'm thinking here of when I use a "state machine" design (but this devolves into the busy-beaver problem), for example, as opposed to "random code". Or a parsing table+indirect addressing rather than just a bunch of "case" tests (I've done both). This gets into the notion of a "toolbox" of generalized algorithms -- Knuth's cookbooks of algorithms, for example. There must be someone working on this sort of thing. The name E. J. McCluskey at Stanford comes to mind. Bill Wvbailey (talk) 15:23, 26 October 2008 (UTC)
There is a very general theorem, Rice's theorem, that says that there is no nontrivial property of a partial computable function, such that the set of programs that compute a function with that property is decidable. In particular, there is no way to decide in general if the function computed by a given program is equal to some predetermined function known in advance. — Carl (CBM · talk) 22:57, 26 October 2008 (UTC)
Ok, the wording in Rice's theorem should talk about "the set of programs that compute a function with that property" rather than given/one (I reverted your last edit and it looks like you are the person to write better wording). Derek farn (talk) 01:36, 27 October 2008 (UTC)
Correct me if I'm wrong, but Rice's theorem doesn't rule out the use of "heuristics" on specific instances of programs; Rice's theorem has to do with a generalized mechanical procedure extended to all possible programs-as-inputs. It would be quite possible for a Carnivore-like monster program to crunch through an instance of "a program" looking for "pattern matches", perhaps simulating it; sometimes it would succeed, sometimes not, and if not it would "time out." All of these programs are finite in nature, and their inputs are finite as well -- so they are finite state machines and hence theoretically "tractable", but in practicality "intractable" in the sense of the busy beaver problem. Rice's theorem is really only applicable to an infinite-tape Turing machine . . . isn't this correct? Bill Wvbailey (talk) 14:42, 27 October 2008 (UTC)
You're morally right, but that solution is very impractical. If you specify a fixed finite amount of memory that the program in question can work with, then it is possible to analyze a program by "brute force", testing its execution on every possible input as if it were a finite state machine. Then you could assign the program a standard form as some other finite state machine. However, you can't do the analysis via any method that does not make use of an assumption of a fixed memory bound, because any general method would also apply in the case of unlimited memory, which is what Rice's theorem prevents. So apart from some limited heuristics, the "normal form" would not be obtained by direct syntactic manipulation of the original program; it would just be some more-or-less arbitrarily selected program that happens to have the same behavior.
For a quantitative estimate, suppose that the program takes a single 32 bit integer as input and uses no other external state information at all. Then there are 2^32 instances of the program that have to be brute-force tested in order to create a normal form. That would take about 50 days to do at 1000 instances/second. If there is any other external information that the program uses that could change, you have to run all those possibilities as well, which would increase the number of possibilities exponentially. — Carl (CBM · talk) 15:15, 27 October 2008 (UTC)
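To make the brute-force idea above concrete, here is a minimal C sketch; the two functions f and g are illustrative stand-ins for "programs", assumed (as in the estimate above) to be pure functions of a single 32-bit input:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative stand-ins for two "programs": any pure
       uint32_t -> uint32_t functions would do here. */
    uint32_t f(uint32_t x) { return x * 2u; }
    uint32_t g(uint32_t x) { return x + x; }

    int main(void) {
        /* Brute-force behavioral comparison over the whole 32-bit domain.
           Finite, hence decidable in principle -- but ~2^32 evaluations. */
        for (uint32_t x = 0; ; x++) {
            if (f(x) != g(x)) { printf("differ at %u\n", x); return 1; }
            if (x == UINT32_MAX) break;   /* avoid wrap-around looping forever */
        }
        printf("equivalent on all 32-bit inputs\n");
        return 0;
    }

Note this certifies only input-output equivalence on this fixed, finite domain; per Rice's theorem, no such general procedure exists once the memory bound is dropped.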

I'm not familiar with Rice's theorem and it does appear very general. Google led me to this article doi:10.1016/j.tcs.2006.07.021 which strikes me as quite relevant:

Prime normal form and equivalence of simple grammars
A prefix-free language is prime if it cannot be decomposed into a concatenation of two prefix-free languages. We show that we can check in polynomial time if a language generated by a simple context-free grammar is prime. Our algorithm computes a canonical representation of a simple language, converting its arbitrary simple grammar into prime normal form (PNF); a simple grammar is in PNF if all its nonterminals define primes. We also improve the complexity of testing the equivalence of simple grammars. The best previously known algorithm for this problem worked in O(n^13) time. We improve it to O(n^7 log^2 n) and O(n^5 polylog v) time, where n is the total size of the grammars involved, and v is the length of a shortest string derivable from a nonterminal, maximized over all nonterminals.

If we can establish a normal form for context-free grammars then wouldn't it suggest Rice's theorem is not a problem? pgr94 (talk) 15:59, 12 November 2008 (UTC)

Context-free languages are only a small subset of all decidable languages. So even if there is a normal form for context-free grammars, this does not imply a normal form for arbitrary grammars. — Carl (CBM · talk) 17:54, 12 November 2008 (UTC)
True, but aren't context-free grammars a good start: "Context free languages are the theoretical basis for the syntax of most programming languages."[1] pgr94 (talk) 18:17, 12 November 2008 (UTC)
Yes, they are a good start. — Carl (CBM · talk) 03:05, 13 November 2008 (UTC)

Can an algorithm truly result in non-deterministic behavior?

I disagree with the idea that some algorithms are non-deterministic, with the only example being a random (actually pseudo-random) number generator. Randomness is a property of physical systems, not mathematical ones. In the actual implementation of these algorithms for random number generation, the variables are set before running; therefore, if the same inputs were applied, they would produce the same random number. More properly, it's a heuristic.

Linuxguru1968 (talk) 00:32, 25 February 2009 (UTC)

This is an interesting question, with an interesting rebutal of the conventional P.O.V.. I believe it deserves a response. My first response is to agree with Linuxguru1968 -- all random number generators that I know of use a non-algorithmic process (e.g. time of day, zener-diode shot noise, etc) to "seed" the generator, but the generator itself is deterministic (i.e. it uses computational (integer) processes in a finite mechanism). Does anybody (hopefufully more knowledgable than myself) have an opinion? Bill Wvbailey (talk) 03:39, 25 February 2009 (UTC)
I don't see how random number generators are related here. We are not talking about randomized algorithms, but about non-deterministic algorithms. Or are we? The better way to think of nondeterministic algorithms is that they run all the possible paths in parallel, not that they pick a random path to follow.
About "non-deterministic algorithms" it's really a matter of language. The recursion theory definition of algorithm requires them to be deterministic. However in computer science the term algorithm is used slightly more broadly than that. In the context of Turing computability there is no gain to non-determinism, but in some weaker classes of languages it makes a difference (see pushdown automaton and NP). — Carl (CBM · talk) 04:50, 25 February 2009 (UTC)
See also Deterministic_algorithm#What_makes_algorithms_non-deterministic.3F. pgr94 (talk) 15:05, 26 February 2009 (UTC)
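For what it's worth, the determinism point above is easy to demonstrate; a minimal C sketch using the standard library's srand/rand (the seed 42 is arbitrary):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Two runs with the same seed print the same "random" sequence:
           the generator is a deterministic function of its seed/state. */
        for (int run = 0; run < 2; run++) {
            srand(42);                 /* the seed is the only non-algorithmic input */
            for (int i = 0; i < 5; i++)
                printf("%d ", rand());
            printf("\n");              /* both printed lines are identical */
        }
        return 0;
    }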

GA Reassessment

This discussion is transcluded from Talk:Algorithm/GA1. The edit link for this section can be used to add comments to the reassessment.

I'm reassessing this article for GA sweeps, to double-check all old GAs. This is the first part, where I do a skim-check; part two, the full review, may occur later. There's one major thing I notice with this article: the references are all in MLA format. They need to be converted to wiki format. I'll give you five days to fix this. If it is fixed, I'll do a full review of the article. If not, I'll delist it. Wizardman 22:14, 18 August 2009 (UTC)

I'm not entirely sure what this means. Could you explain what you mean by MLA vs wiki? And also, could you point to which GA guideline the current referencing style infringes? Thanks. RobHar (talk) 22:44, 18 August 2009 (UTC)
I mean right now all the references are in (Name: #-#) format, when they really should not be. They should be in either Harvard referencing style or in a style where clicking can bring the viewer straight to the reference. Wizardman 22:48, 18 August 2009 (UTC)
WP:CITE says that any consistently-used format is acceptable. Moreover, choice of style alone is not a sufficient reason for changing an established referencing style. However, I would rather see this article delisted if it will prevent this sort of discussion in the future. — Carl (CBM · talk) 00:10, 19 August 2009 (UTC)
It's also true that the refs in this article are not consistent, although "(Kleene 1967:45)" seems to be the predominant style. They should be consistent, at least. — Carl (CBM · talk) 01:15, 19 August 2009 (UTC)
Wizardman, the GA criteria do not specify any particular style of citation formatting. You are supposed to be assessing this article against the GA criteria, not your own personal preferences. --Malleus Fatuorum 00:18, 19 August 2009 (UTC)
Actually, I do favor delisting this article at this time, due to concerns about the content, especially in the lower sections. Having a review underway makes it harder to do significant editing. Once the content is improved, we can always have the article reviewed again. — Carl (CBM · talk) 00:32, 19 August 2009 (UTC)
  • Unless I've missed something in WP:CITE, there's nothing wrong with MLA format (although I personally find it a pain). The only minor issue is that it is a bit inconvenient to see an inline reference and then have to scroll down to the bibliography to search for the proper full citation (no linking function as with the ref tags). Again, though, it's a minor issue, and should not be used to pre-empt a proper review, especially at the GA level. Dabomb87 (talk) 01:13, 19 August 2009 (UTC)

The GA requirements require that citations exist, not that they be Harvard, Turabian, APA, MLA, or any specific style. This is not a reason to delist the article. -- Avi (talk) 19:57, 19 August 2009 (UTC)

  • Someone else can review the article then. I still think the way the cites are formatted is an issue, but I don't care enough about this to fight it. Wizardman 03:42, 20 August 2009 (UTC)

WP:CITE#HOW says that "Any of these styles is acceptable on Wikipedia so long as articles are internally consistent." I think the current article is not internally consistent. For example, references with page numbers have been formatted using at least six different styles: (Smith 2008:1), (Smith 2008, p. 1), (Smith 2008, p. 1) with hyperlink, (Smith:1), (Smith, p. 1), and (Smith, p.1). I do not agree with the original verdict that the MLA style is bad – besides, I didn't even notice a single example of pure MLA style, which would be (Smith 1). However, I agree that mixing up all these different styles is not appropriate for a GA. — Miym (talk) 22:07, 20 August 2009 (UTC)

  • The history section is way long and full of not incredibly important stuff. Please move it to a separate article and summarize its main points. Pcap ping 01:26, 22 August 2009 (UTC)
There already is a split-out article called Algorithm characterizations that deals with much of the history. The history section in the article is appropriate length (and was added to address an article-improvement suggestion a number of years ago). I'm going to revert this silly banner above the page. Bill Wvbailey (talk) 03:33, 22 August 2009 (UTC)
  • Some footnotes do not correspond to any discernible reference, for instance "Valley News, p. 13". Pcap ping 18:09, 22 August 2009 (UTC)
Fixed.Wvbailey (talk) 14:52, 7 September 2009 (UTC)
  • The very 1st reference, "al-Daffa 1977", that's supposed to cover the etymology, is also not present. I can't be bothered with much more checking, except to say that I find the prose really convoluted: the repeated direct references to the sources become very distracting. You already know my opinion about the marginal value of much of the information in this article, which only serves to dilute important stuff. Computability is not mentioned anywhere except in references, for instance. The article does not introduce itself as chiefly an etymological or historical treatise, but that's what it is... I would delist it, but I'm irrelevant around here. Pcap ping 18:52, 22 August 2009 (UTC)
The spelling of al-Khowârizmi's name, and the exact historical details (i.e. scholarship, truth), have been a matter of great contention in this article; what the reader sees now has been reverted, altered and messed with probably 100 times; all that I can do is quote this from Knuth (The Art of Computer Programming, Vol. 1, 2nd Edition: Fundamental Algorithms 1968, 1973:1-2); he gives no source for this that I can find: "Finally, historians of mathematics found the true origin of the word algorism: it comes from the name of a famous Persian textbook author, Abu Ja'far Mohammed ibn Mûsâ al-Khowârizmi (c. 825) -- literally, "Father of Ja'far, Mohammed, son of Moses, native of Khowârizm." Khowârizm is today the small Soviet city of Khiva. Al-Khowârizmi wrote the celebrated book Kitab al jabr w'al-muqabala ("Rules of restoration and reduction"); another word, "algebra," stems from the title of his book, although the book wasn't really very algebraic. ¶ Gradually the form and meaning of "algorism" became corrupted; as explained by the Oxford English Dictionary, the word was "erroneously refashioned" by "learned confusion" with the word arithmetic. The change from "algorism" to "algorithm" is not hard to understand in view of the fact that people had forgotten the original derivation of the word. An early German mathematical dictionary, Vollständiges Mathematisches Lexicon (Leipzig, 1747), gives the following definition for the word Algorithmus: "Under this designation are combined the notions of the four types of arithmetic calculations, namely addition, multiplication, subtraction, and division." The Latin phrase algorithmus infinitesimalis was at that time used to denote "ways of calculation with infinitely small quantities, as invented by Leibnitz." ¶ "By 1950, the word algorithm was most frequently associated with "Euclid's algorithm," a process for finding the greatest common divisor of two numbers which appears in Euclid's Elements (Book 7, Propositions 1 and 2). It will be instructive to exhibit Euclid's algorithm here: [etc]." Knuth (pages 225-227) offers an interesting history of various computational notions such as "subroutines", "coroutines", "interpretive routine", "buffering", "semaphores" and "concurrent process". Wvbailey (talk) 15:30, 7 September 2009 (UTC)

Original classification

While the various types of algorithms such as linear programming, dynamic programming, etc. certainly exist, the taxonomical division into "by implementation" and "by design paradigm" appears to be WP:OR to me. Please provide a citation that someone uses those two categories containing the classes listed in this wiki article. Pcap ping 14:20, 7 September 2009 (UTC)

This is not "original research". The distinction of "by implementation" vs "by design paradigm" is quite obvious and unoriginal (hence, not "research", and certainly not "original"). The described terms are as-used in the field of computer science, and have been for at least the last 20 years. While the author's taxonomy might be strange to you, I assure you that most people with degrees in mathematics or computer sciences suffer no confusion. --68.83.74.86 (talk) 01:11, 16 January 2010 (UTC)
In Wikipedia, it is not enough that someone assures something. You have to provide references to reliable sources. See WP:V. — Miym (talk) 10:53, 16 January 2010 (UTC)

I think one of the problems is that "by implementation" seems to mix up very different kinds of classifications. For example:

  • The distinction between serial/parallel/distributed algorithms is related to different models of computation. Other similar examples might be quantum algorithms. These are really apples and oranges, not just some kind of implementation details (for example, if you need a distributed algorithm, a quantum algorithm is of extremely little use).
  • Recursion and iteration might be plausibly called design paradigms (or, indeed, implementation details). These are convenient techniques of designing and analysing algorithms. They don't really affect the external behaviour of the algorithm (perhaps the complexity but not the output).
  • Exact vs. approximate is related to the output of the algorithm; it is clearly visible to the user of the algorithm.

So we have already three fundamentally different categorisations lumped together: (1) on which platform you can use the algorithm, (2) what goes on inside the algorithm, and (3) what does the algorithm output. Hence I'd be surprised if there really were reliable sources that used this kind of taxonomy. — Miym (talk) 11:31, 16 January 2010 (UTC)

I've removed the {{or}} tags. Is this section referring to a formal classification system, or using it as a convenient textual device to discuss the types of algorithm? Perhaps the section could be renamed "types of algorithm" or something similar.--Salix (talk): 07:36, 20 September 2010 (UTC)

Dubious

How is classification by "computing power" different from the complexity classes? Pcap ping 14:25, 7 September 2009 (UTC)

Good question, I vote remove this. Ben1220 (talk) 10:39, 1 December 2009 (UTC)
I agree, deleting. pgr94 (talk) 15:50, 15 January 2011 (UTC)

Missing topics

This article makes no mention of randomized algorithms or of quantum computing/quantum_algorithm, so it's stuck in the 1980s or so. Rather unacceptable for a GA. Pcap ping 14:42, 7 September 2009 (UTC)

Randomized algorithms were barely mentioned as "probabilistic algorithms", a terminology that is not commonly found in publications after 1990; I've added some basic details as well. Pcap ping 15:14, 7 September 2009 (UTC)

Algorithms + Data Structures = Programs

I'm surprised; there is a very famous equation missing:

  • Algorithms + Data Structures = Programs
    Wirth, Niklaus (1978). Algorithms + Data Structures = Programs. Prentice Hall PTR. ISBN 0130224189.

More: a paragraph about "Data Structures" should be written (variable, array, list, graph, ...), and a short description of the main "Control Structures" (sequence, if-then-else, loop, ...) —Preceding unsigned comment added by 86.200.233.182 (talk) 09:17, 28 December 2009 (UTC)

Definition in the opening sentence.

Wouldn't it be better to say that an algorithm is an effective method for completing a task, such as solving a problem, rather than saying it's an effective method for solving a problem? While some definitions of problem could certainly encompass the examples of algorithms I have in mind (hashes etc.), it certainly doesn't seem to suggest these meanings in the context of the article, which includes such problems as fixing a broken lamp etc. that fit the more typical, narrow definition of problem. I think the definition now given is unclear and tends to be interpreted as excluding simple instruction sets whose completion doesn't accomplish another task, as the "fixing a broken lamp" example does (it fixes the lamp, which is an outcome independently defined).--Δζ (talk) 05:03, 5 March 2010 (UTC)

Interesting question. I am going to assert that an algorithm has to do only with computation procedure (this includes the human act of calculation), i.e. the result of a computation is the output. This requires a definition of "computation procedure". To support this premise, two examples from the literature pop to mind immediately:
(1) "Turing's work gives an analysis of the concept of "mechanical procedure" (alias "algorithm" or "computation procedure" or "finite combinatorial procedure"). This concept is shown to be equivalent with that of a "Turing machine"."(from Goedel's 1964 Postriptum to his 1934 Princeton lectures, cf Davis 1965:71)
(2) "12. Algorithmic theories. . . . In setting up a complete algorithmic theory, what we do is to describe a procedure performable for each set of values of the independent variables, which procedure necessarily terminates and in such manner that from the outcome we can read a definitive answer, "Yes" or "No," to the question, "is the predicated value true?" (Kleene 1943 in Davis 1965:273)
So, as I assert the above implication: "algorithm --> "computation procedure"" as true, then what does "computation procedure" imply? I will go out on a limb to assert that it implies (i) the appearance on a "medium" (paper, neurons, magnetic strip, flip-flops and logic gates, ROM or EPROM memory, etc etc) of a concatenation of symbols (marks) that can be, if necessary, placed in one-to-one correspondence with the integers, AND (ii) an effective evolution of the symbols per a (very-)finite set of rules, AND (iii) marks (symbols) as output, and (iv) an "effective" target machinery (man or mechanical). The question that is a matter of contemporary debate is whether an "algorithm" alias "computation procedure" includes the physical machinery itself (mechanical or human). My take is that most folks believe that "algorithm" is only the concatenated marks that are "effectively readable" and "effectively executable" by the target machinery (i.e. humans, computers, whatever . . .). Others believe that the machinery (humans, computers) are part of the algorithm itself because here is where the rules are to be found; otherwise the algorithm must include the rules in a form "effective" with respect to the machinery (human or otherwise).
So given the above, is a cake-recipe an algorithm? Are instructions for changing a bulb an algorithm? Only if we convince ourselves that they represent "computation procedures". Simplistically, they fail because their "output" is not symbols, but rather an analog (as opposed to digital) cake and a changed light-bulb. However, I would argue that part of the process of baking a cake or changing a light bulb is algorithmic in nature -- this would be the symbol-manipulation aspect of the human- or robot-mind involved in reading the recipe or recalling the learned or a priori instructions (i.e. recalled from neurons or ROM).
If it were up to me I would restrict the lead paragraph to say that an algorithm is that which directs an effective agent (man or machine) to perform an effective computation procedure. In other words, I am restricting the output to "numbers" (symbols and symbol-strings one-to-one mappable to the integers). But I'm open to debate. Bill Wvbailey (talk) 16:36, 5 March 2010 (UTC)

For the origin of the name to match the article http://en.wikipedia.org/wiki/Algorism I added some words - 13dec2010 —Preceding unsigned comment added by 79.129.49.66 (talk) 08:39, 13 December 2010 (UTC)

Please add example

Is there an effective method that is not an algorithm?

(then we can see if need to merge or not) —Preceding unsigned comment added by 187.39.184.57 (talk) 12:57, 8 May 2010 (UTC)

The Church–Turing thesis states that the two notions coincide. No counterexample to this thesis is known (and so we can't satisfy your request and add one), but neither has it been proved.  --Lambiam 09:22, 2 October 2011 (UTC)
I beg to disagree. The terms effective method and algorithm are synonymous. The Church-Turing thesis is that these two notions coincide with a third notion (Turing-computability, or Lambda-calculus computability, as they are equivalent). Check any Computability textbook. I second the merge proposal. AmirOnWiki (talk) 16:38, 12 October 2011 (UTC)
Indeed; incorrect statement struck out.  --Lambiam 19:00, 12 October 2011 (UTC)

-- The effective method article should be called "effective calculability", not "effective method". There's a difference, hence the confusion. I know of at least one "effective method" that uses analog techniques, aka a feedback loop to control voltage or current (been there done it), as opposed to an "effective computation" that does the same thing using digital techniques (aka microprocessor been there, done that too), and an "effective method" to create random numbers -- one that uses zener diodes to create random noise (been there, done it) as opposed to a digital method using pseudorandom number generators (been there done it both on spreadsheets and microprocessors). More precisely:

For all "effective methods" whatever, "algorithm" --> "effective method", but "it's not always the case that: "effective method" --> "algorithm".

Note carefully here: I wrote method, not "calculable/computable function". Here are some references: in particular, Church 1935 refers to "effective calculability". The compilation The Undecidable has the phrase listed in the index 10 times (Davis 1967). The following is Church's Thesis, verbatim:

"7. The notion of effective calculability. We now define the notion, already discused, of an effectively calculable function of positive integers by identifying it with the notion of a recursive function of positive integers (or of a lambda-definable function of positive integers). This definition is thought to be justified by the considerations which follow, so far as positive justification can ever be obtained for the selection of a formal definition to correspond to an intuitive notion.
" It has alrady been pointed out that, for every function of positive integers which is effectively calculable in the sense just defined, there exists an algorithm for the calculation of its values.
"Conversely it is true, under the same definition of effective calculability, that every function, an algorithm for the calcuation of the values of which exists, is effecitvely calculable." (Church 1935 in Davis 1967:100).

My Kleene 1952 cites the notion in the appendix as follows:

"effectively: calculable function, decidable predicate 199, 300, cf Church's thesis, decision problem; enumerable set of formulas 398; true formula 465." (Kleene 1952:541).

I'm not surprised that, in the intervening years, the notion has drifted, because people are confusing method (mix one pound of flour +/-2 ounces, 5/16 cup water (+/-1/16 cup), 1 tablespoon yeast (+/-1/8 tbs), 1 teaspoon salt (+/-1/8 tsp), mix, let rise, beat down, knead, let rise 1 hour +/-15 min, beat down, cut into 2 loaves of roughly-equivalent size, let rise in pans for 1 hour +/-15 minutes, bake at 375 deg F +/-20 F for 25 minutes, spraying with water 3-5 times at approx 5 minute intervals, etc etc) -- the analog "formula" for french bread -- with discrete computation (based on Platonically-precise units, not analog measurements). See also the first chapter in Knuth (cf page 6), who makes a similar comparison under his section "5) Effectiveness: An algorithm is also generally expected to be effective. This means that all of the operations to be performed in the algorithm must be sufficiently basic that they can in principle be done by a man with paper and pencil." (page 6). Note that this rules out effective methods for baking bread, creating true random noise, or controlling analog feedback loops. Bill Wvbailey (talk) 21:17, 12 October 2011 (UTC)

Boolos & Jeffrey quote not relevant

The "Why algorithms are necessary: an informal definition" section has a quote from Boolos & Jeffrey. However, it is from a book that does not even contain the word "algorithm." I don't believe this quote is relevant (they're actually leading up to the notion of Turing machines), and implying that they were writing about algorithms seems misleading at best. I think we should remove the quote and its discussion. What do you think?

Google Books link searching for the word "algorithm" in that book: http://books.google.com/books?id=8BY9AAAAIAAJ&pg=PA19&dq=no+human+can+write+fast+enough&hl=en&sa=X&oi=book_result&ct=book-preview-link&resnum=1#v=onepage&q=algorithm&f=false

- Alan, 3 October 2010  —Preceding unsigned comment added by 71.111.255.254 (talk) 05:32, 3 October 2010 (UTC) 
It is more than relevant. It is explaining how a process defined by a finite string of symbols, aka an algorithm, is used to specify how to compute (calculate, thus specify) a potentially infinite range given an infinite domain of integers. Thus it represents a compression of a potentially-infinite listing of all the possible numbers (the range) into a very finite string of symbols (the algorithm itself). This notion is of extreme importance in topics such as Kolmogorov complexity. It is also the reason you can multiply any two arbitrary numbers -- you're following a finite process that you learned in school and can apply to any two of an infinite number of integers. And even if this quote were leading up to a description of Turing machines, so what? By the Church-Turing thesis, any (every) algorithm can be converted into a Turing machine program. Bill Wvbailey (talk) 14:55, 3 October 2010 (UTC)

Funny quote

Uttered by the Anti-Defamation League: "Google employs technology that automatically ranks sites based on a complicated formula called an algorithm." DYK? Loggerjack (talk) 19:51, 10 December 2010 (UTC)

The section 'Why algorithms are necessary' doesn't seem to explain "why algorithms are necessary"

When I read the title I expected to find an explanation of why we have to use algorithms instead of just using a singular expression. I understand that it would be hard to do that, but I don't really understand 'why' in a mathematical sense. Can all algorithms be translated to expressions? If not, why? Has this question been answered? Has it been asked? By whom? Etc... Have I missed a part of the article here?

Recent revert

I am quite worried by the recent revert without stated reasons (not to mention a somewhat disrespectful tone). I deleted big chunks of chaotic, unreferenced and dubious text and changed some phrasing. If you see value in some deleted pieces, please specify what it is and please provide references, according to wikipedia rules. I would have let it go if you were defending a well-thought-out, well-written and well-referenced text, but this text is quite messy, and I don't see any ongoing serious effort. Loggerjack (talk) 01:29, 15 January 2011 (UTC)

For example, the phrases like "We can derive clues to the issues involved and an informal meaning of the word from the following quotation" are good for an essay, but not for encyclopedia. Wikipedia is not about "clues" but about facts, which are verifiable from references. Loggerjack (talk) 01:38, 15 January 2011 (UTC)

You're a bit of a newbie. This edit-revert business is rather standard protocol. You have attempted to make changes without presenting suggestions for them on the talk page first. Then someone like myself comes along and reverts you. At least now you're on the talk page, discussing your complaints. This gives other editors time to respond. And for you or others to recruit responses without distorting an article that has been in place for quite a long time without your edits.
Rome wasn't built in a day, nor will be this article. I'm going to revert you again. This is revert #2. I'm keeping count. As you may have noticed by looking at the history, I'm not the only one to revert your changes.
What I suggest is you list your complaints here, first. BillWvbailey (talk) 03:20, 15 January 2011 (UTC)
Hi Loggerjack, Welcome to Wikipedia. I saw the edits/discussions and would like to suggest a slower, more friendly pace in your edits. They say "anyone can edit Wikipedia" yet just as anyone can walk on a sidewalk, we should all be careful not to step on other toes/edits/text/feelings. It may help to read Wikipedia:STATUSQUO and WP:BRD as well. Your cooperation will be appreciated. History2007 (talk) 04:06, 15 January 2011 (UTC)
I don't agree with all of Loggerjack's edits (I reverted one myself) but I agree with his opinion on the sentence mentioned above: "We can derive clues to the issues involved and an informal meaning of the word from the following quotation" It does sound like original research. pgr94 (talk) 11:04, 15 January 2011 (UTC)
Well, discussion of content is one thing, the tone of that discussion is another. If the tone calms down it is always better. I am not going to be watching this page any more (I came here by chance) but now that I am here let me suggest that the History of Algorithms becomes a separate page with a Main link. As is the article is too long to need that, and the etymology is lost way down. Overall, not a bad article in fact and although it can be improved no need to rip it all apart. Anyway, I am off to other lands now. Cheers. History2007 (talk) 13:53, 15 January 2011 (UTC)
I had a look at Loggerjack's edits when they were done and I thought they were good ones. I've read them over again since they were reverted and it went to discussion, and I still can't see any good reason to revert them. The smaller edits were fine, and the big section was, I thought, scrappy and unnecessary and uncited, and better off removed. Editing first and then being reverted by someone if they object is good editing practice unless the edit is obviously contentious; that's what WP:BRD is all about. Now that it is at the discuss stage - perhaps someone could discuss improving the article rather than getting engaged in an edit war? Otherwise I'll start reverting on behalf of Loggerjack if you really want to start counting reverts. Thanks Dmcq (talk) 15:16, 15 January 2011 (UTC)
Per Loggerjack's complaints I wordsmithed the informal definition section and added a host of references. I don't know what to think about the long section re the CASE statements. I added to and cleaned up an editor's addition, but there is plenty of sourcing for the importance of the CASE statement re the McCarthy formalism and Marvin Minsky's discussion of this in his 1967. It could be shortened. But I wouldn't want to see its wholesale elimination. As for removing or messing with the etymology in the lead paragraphs, this has been so contentious that I've considered asking for a partial-protection. So I'd suggest you leave this alone, except do Loggerjack's move of the Greek ending down into the etymology. If someone could suggest a shortened history, I'd say that a move of most of it to its own extension-page wouldn't hurt. BillWvbailey (talk) 16:29, 15 January 2011 (UTC)
The case statement might be important, but it is pretty unimportant and unnecessary in this context. The implementation of it on a computer requires quite a bit of work in even the simplest cases. If there is going to be a computer example it should be nice and simple, and preferably an example used elsewhere as a simple illustration of a computer algorithm. Dmcq (talk) 19:29, 15 January 2011 (UTC)
You are probably right. This is one of those sections that just "evolved" without any broad plan behind it. Given that this section exists at all, what should it contain? Before an example algorithm can be given, the section needs a computer structure and a language, like Knuth's assembly-language MIX computer, but much, much simpler. (I've never been a fan of the high-level "pidgin computerese" used in the sorting example -- it's so high-level that the notions of "precision" and "definiteness" are lost, nor is the sorting example simple. Stone 1973 starts off with a word-specification of this sorting algorithm . . . but only to use it to show why his example specification is incomplete). Stone also creates a computer (derived from an H-P 2116 computer) and an assembly language into which he casts his examples.
About the simplest algorithm is one for adding "y" to "x". That's how Davis 1958:12 begins his examples of Turing programs; he then moves on to subtraction, then to proper subtraction, finally ending with multiplication on page 12. These are (very) tedious because they're all in state-machine-based "Turingese". Eventually Davis came to use his Post-Turing machine model with numbered lines of "code" rather than state-machine "code". Boolos-Burgess (various editions) and Boolos-Burgess-Jeffrey 2002 follow in the footsteps of Minsky 1967, first creating a counter machine model they call an "abacus" model -- a number of holes with arbitrary capacity, and an endless supply of pebbles to throw in the holes. It's actually Lambek's rocks-in-holes model consisting of two "action"-instructions and a "conditional" instruction: { INCREMENT hole n, DECREMENT hole n, IF hole n IS EMPTY THEN JUMP_TO_INSTRUCTION # ELSE NEXT INSTRUCTION, HALT}. B-B-J and Minsky write some simple algorithms with sequentially-numbered lines of this "code". This is about as simple as a machine and a language could be, and it certainly has a number of well-sourced adherents (B-B-J call this model "an idealized version of [a random-access storage] computer" p. 46). Bill Wvbailey (talk) 21:36, 15 January 2011 (UTC)
RE a really simple example that is referenced (i.e. to avoid accusations of O.R.): I've reviewed articles by Wang 1957, Melzak 1961, Lambek 1961, Shepherdson-Sturgis 1961 (hereafter S-S), Minsky 1967, Boolos-Burgess (editions 1-3), Boolos-Burgess-Jeffrey 2002 (their "abacus" machines), I think that the best hyper-simple computational "model" is the S-S model with the following instruction set:
{ INCrement n; DECrement n; CLeaR n; COPY (m,n); Jump address; Jump-if-Zero n, address }
Here I've used more-modern mnemonics (we could use Motorola/Freescale MCS05 assembly language if someone were to object) and n is an arbitrary "register" (hole in the ground) (cf S-S 1961:219). In fact, it is their word-description of how to eliminate this COPY instruction that I'd propose as the simple model:
"We now try to reduce the [6] instructions to a smaller and simpler set. The most obvious candidate for such a replacement is the copy instruction . . .. The subroutine which springs to mind for defining this in terms of the other instructions is to keep adding one into register n and subtracting one from register m until the latter is empty . . . unfortunately it destroys the original. We can avoid this by making two copies at once and afterwards copying one of them back into register m." (S-S 224)
If such an example is used it should be backed up with a flow-chart. With regards to the flowchart symbols to use and the appropriate program structures the (rather old) reference I have for how to proceed is Robert C. Tausworthe 1977 Standardized Development of Computer Software, Prentice-Hall Inc, NJ, ISBN: 0-13-842195-1. He observes (cf p. 100) that the 3 structures (a) DO f THEN g, (b) IF c THEN f ELSE g, and (c) WHILE c DO f are sufficient (with two references Böhm and Jacopini 1966 and Mills 1972). This might also be a good source for a "pidgin" high-level language (he extends the 3 canonical structures to include two more -- a DOWHILE, and a CASE). Bill Wvbailey (talk) 16:33, 17 January 2011 (UTC)
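By way of illustration, here is a toy rendering in C of the S-S copy subroutine described above (the register indices and the scratch register t are my own labels, purely illustrative; only increment, decrement and test-for-zero are used, mimicking INC, DEC and Jump-if-Zero):

    #include <stdio.h>

    unsigned reg[4];                       /* a few "holes in the ground" */

    void copy(int m, int n, int t) {       /* COPY(m,n) via scratch register t */
        while (reg[n]) reg[n]--;           /* CLR n */
        while (reg[t]) reg[t]--;           /* CLR t */
        while (reg[m]) {                   /* empty m, making two copies at once */
            reg[m]--; reg[n]++; reg[t]++;
        }
        while (reg[t]) {                   /* afterwards copy one back into m */
            reg[t]--; reg[m]++;
        }
    }

    int main(void) {
        reg[0] = 5;
        copy(0, 1, 3);
        printf("reg0=%u reg1=%u\n", reg[0], reg[1]);   /* prints reg0=5 reg1=5 */
        return 0;
    }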

I am glad that my revert has brought attention to this page. But I don't buy the claim "This edit-revert business is rather standard protocol." If it is standard, then it is sad (and it is doubly sad if you don't know why it is sad). My edits were made with comments. I fail to see why a wholesale revert, without stating disagreements with my comments, is a civilized editing practice. Anyway, I will just keep in mind to stay away from editors like Wvbailey, who assumes he has some kind of seniority rights over article content, and "newbies" have to bow before him. Loggerjack (talk) 16:55, 17 January 2011 (UTC)

How about just doing Euclid's algorithm for Greatest_common_divisor, show the maths version and then a simple version without recursion in any common computer language for the computer version. There is no need to get tied in knots defining a special machine or anything like that. You just have to say 'in BASIC' or something like that for the computer version. Dmcq (talk) 20:13, 17 January 2011 (UTC)

Simple example: Euclid's algorithm

Dmcq's suggestion about displaying Euclid's algorithm in e.g. Basic (above section) is really interesting. After I wrote about Tausworthe's three minimal elements of a programming language (above section) I noticed how much like Basic it is. I haven't programmed in Basic in many years (since the last good dedicated interpreter/compiler disappeared). What would a program for Euclid's algorithm look like, i.e. one that a person could run as a macro in Excel, perhaps? The Euclid algorithm is written in "pseudocode" in a couple of intimidating chunks. BillWvbailey (talk) 20:42, 17 January 2011 (UTC)

I had a look at a couple of BASIC versions and they differed greatly in syntax. Perhaps C is a better language, as it is so widespread, plus it is standardized. Dmcq (talk) 23:47, 17 January 2011 (UTC).

RE Basic variants: I cut my teeth on Dartmouth Basic (ca 1965) -- the "unstructured" version with line numbers and GOTOs. Then in the early 1980's, just when the very first Macintoshes (e.g. the 256K, 512K, Enhanced etc) came out, I bought a Mac version of a Microsoft Basic interpreter that would handle either the structured (without line numbers) or the unstructured (with line numbers and GOTO) style, a wonderful thing indeed. The reason I mentioned Excel macros is this would freeze the syntax, plus it would be easy to "test".

RE C: I've not used it professionally, so I've forgotten the syntax of all but the CASE instruction. As I attempted to teach myself C I found it unintuitive and difficult, nevertheless . . . .

If you or someone knows how to write the routine in C, it would be worth a look. Would the C version be amenable to a flowchart? (This has got me intrigued enough to want to program it in something, e.g. line-by-line in Excel). Bill Wvbailey (talk) 00:56, 18 January 2011 (UTC)

Well, a straightforward example of C code for integers greater than 0 is given in [2], page 12 of Programming Language Pragmatics by Michael Lee Scott; this is just a subroutine, it doesn't do I/O. Here is an ancient Basic version that does read and write:
   10 REM Euclid's algorithm for greatest common divisor
   20 PRINT "Type two integers greater than 0"
   30 INPUT A%,B%
   40 IF B%=0 THEN GOTO 100
   50 IF A% > B% THEN GOTO 80
   60 LET B%=B%-A%
   70 GOTO 40
   80 LET A%=A%-B%
   90 GOTO 40
  100 PRINT A%
  110 END
These use an inefficient version which just does subtraction rather than a remainder operation, but I believe that is how Euclid originally formulated it. There are also recursive versions which express the concept better but aren't so immediately flowchart-like, or so obviously like an algorithm. Dmcq (talk) 13:16, 18 January 2011 (UTC)
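Since the recursive versions come up here but aren't shown, this is the usual remainder-based recursion, sketched in C (a standard formulation, not the code from Scott's book; assumes a, b > 0):

    /* gcd(a, b) = gcd(b, a mod b), with gcd(a, 0) = a */
    unsigned gcd(unsigned a, unsigned b) {
        return b == 0 ? a : gcd(b, a % b);
    }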

Nice, I like it: a reader could readily translate it into a counter-machine algorithm. I'll work on a flowchart. And try to get it to run as an Excel macro (after I buy a new VBA for Dummies book; I plugged it in and after a bit of disabling the in and out it didn't result in errors, but then again it didn't print out anything either). Thanks, BillWvbailey (talk) 18:08, 18 January 2011 (UTC)

(Reference for symbols and drawing style is Tausworthe 1977).

I mocked up your algorithm on an Excel spreadsheet to check it. It works (it takes two lines, an A-line =IF(B=0, "HALT", IF(A > B, A-B, A)) and a B-line =IF(B=0, "HALT", IF(A <= B, B-A, B)), and it goes across the page, A and B being the cells to the immediate left; you get to see all the intermediate results). Your algorithm is rather tricky/clever in how it flips between subtracting A from B or B from A. Also, the IF-THEN box that does the "greater than" implies the other exit is "less than or equals"; I had this wrong in my first Excel mock-up.

The algorithm I found in my trusty Hardy and Wright 1979:180 requires you to start with A >= B > 0. Then you find the residue r1, where A = q1*B + r1, then put B in the A-place and put the residue r1 into the B-place, i.e. B = q2*r1 + r2, and repeat this process until the residue is 0; what is in the B-place is the g.c.d. The fact that there are two rather different algorithms that do the same thing is interesting in its own right. BillWvbailey (talk) 21:48, 18 January 2011 (UTC)

I found the same algorithm as Hardy and Wright 1979 in Knuth 1973 Volume 1 (2nd Edition):2. (That's page 2!). This is how he presents it:

Algorithm E (Euclid's algorithm). Given two positive integers m and n, find their greatest common divisor, i.e., the largest positive integer which evenly divides both m and n.
E1. [Find remainder.] Divide m by n and let r be the remainder. (We will have 0 <= r < n.)
E2. [Is it zero?] If r = 0, the algorithm terminates; n is the answer.
E3. [Interchange.] Set m <-- n, n <-- r, and go back to step E1. ||

On page 4 Knuth adds the test-and-swap of m and n:

E0. [Ensure m >= n.] If m < n, exchange m <--> n.

Would it be worth programming this and flowcharting it too, to make the point that more than one algorithm can be written to achieve exactly the same task? There must be something pedagogically useful in this point. Wvbailey (talk) 23:06, 18 January 2011 (UTC)Bill
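For what it's worth, a direct transcription of E0-E3 into C might look like this (my rendering, not Knuth's; assumes m, n > 0):

    /* Knuth's Algorithm E, with the optional E0 swap from page 4. */
    unsigned euclid(unsigned m, unsigned n) {
        if (m < n) { unsigned t = m; m = n; n = t; }   /* E0: ensure m >= n  */
        for (;;) {
            unsigned r = m % n;                        /* E1: find remainder */
            if (r == 0) return n;                      /* E2: is it zero?    */
            m = n; n = r;                              /* E3: interchange    */
        }
    }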

That reference to a C example gives three different versions; the LISP and PROLOG versions use recursion, and as I said, more efficient versions get the remainder from a divide each time round rather than doing repeated subtracts. I was just thinking of something that illustrated that an algorithm was an effective procedure rather than just an existence statement or conditions for a solution to be correct. Nobody could say the GCD algorithm wasn't something one could actually follow one step at a time. Dmcq (talk) 00:14, 19 January 2011 (UTC)
Wvbailey, I am not too sure what you mean when you say that "there are two rather different algorithms that do the same thing". While this is indeed true in general, in the case of Euclid's algorithm, taking the remainder (of the division of A by B) or repeatedly subtracting the lesser (B, say) from the greater (A, say) till the difference is less than B amount to exactly the same thing. If you subtract B repeatedly from A, when you stop, what you are left with is the remainder (and the number of subtractions is the quotient). Goochelaar (talk) 23:48, 23 January 2011 (UTC)

Agreed, I think . . .. What I've (come to) mean is: given that we, like Euclid, are restricted to subtraction (no division), we can still construct the algorithm in more than one way. The "ways" aren't terribly different, but they will yield significantly-different flowcharts. They're interesting because they (may, I'm not sure yet) illustrate Knuth's concern about "elegance" (i.e. speed of execution, less use of resources). For instance, we can do the algorithm per the flow chart with two subtraction-loops (B <-- B - A, and A <-- A - B) -- this one looks elegant. Or we can use a second, more obvious version that has a single "R <-- M - S" loop, where R = remainder, M = minuend, S = subtrahend. This second version requires the A > B test (at the beginning) to put the larger number into minuend M and the smaller into subtrahend S. And then, before another residue-loop, this second version requires us to put S into M and R into S. I haven't tested this version yet. My guess is it takes more instructions, and it does require an additional register/memory R. I'm actively working on all this, trying to hone the diagrams and commentary. But it's taking longer than I would have supposed. First I had to understand exactly what the flowchart version was doing -- it wasn't crystal-clear (its trickiness is a sign of elegance, perhaps). BillWvbailey (talk)
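To pin down that second version, here is a sketch in C (R, M, S follow the word-description above; subtraction-only, assumes inputs > 0; purely illustrative, not checked against the flowchart):

    /* Single subtraction-loop variant: larger value in minuend M,
       smaller in subtrahend S, residue accumulates in R. */
    unsigned gcd_single_loop(unsigned a, unsigned b) {
        unsigned M = (a > b) ? a : b;      /* the A > B test at the beginning */
        unsigned S = (a > b) ? b : a;
        while (S != 0) {
            unsigned R = M;
            while (R >= S) R -= S;         /* the one R <-- M - S loop */
            M = S;                         /* put S into M ...         */
            S = R;                         /* ... and R into S         */
        }
        return M;
    }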

The diagram of Euclid's algorithm from T.L. Heath 1908, with more detail added. Euclid does not go beyond a third measuring, but Heath's discussion at the end of Proposition 2 cites Nicomachus's similar "rule", where Nicomachus makes it clear that the process consists of "taking the remainder as many times as possible from the less of the original numbers; and so on; and this process will end with "an unit" or "at some one and the same number". Nicomachus gives the example of 49 and 21: "I subtract the less from the greater; 28 is left; then again I subtract from this the same 21 (for this is possible); 7 is left; I subtract this from 21, 14 is left; from which I again subtract 7 (for this is possible); 7 will be left, but 7 cannot be subtracted from 7." Heath comments that "The last phrase is curious, but the meaning of it is obvious enough, as also the meaning of the phrase about ending "at one and the same number" (Heath 1908:300).
Euclid's algorithm from T.L. Heath 1908, The Thirteen Books of Euclid's Elements Translated from the Text of Heiberg with Introduction and Commentary: Volume II, Books III-IX, Cambridge: at the University Press. A "number is a multitude composed of units", i.e. a whole number. And to "measure" is to lay a shorter length successively along a longer until a remainder is less than the shorter length. A "measure" may or may not leave a remainder.

Rather than clutter up the talk page I've moved the text that was here to a sandbox page in my name space, User:Wvbailey/Euclid's algorithm. It's undergone a host of revisions, and it's in a massive state of flux. Anyone who wants to take a peek, please do, and leave comments on the talk page. To me the most interesting aspect is the notion that, given the same instruction set, a function can have two or more quite-different algorithms/programs (cf Rogers 1987:1-2). This leads to Chaitin's notion of "elegance" (the shortest algorithm/program), and Knuth's notions of "goodness" (e.g. fastest, uses the least resources). Also structured programming, and Knuth's induction-method to verify a program. BillWvbailey (talk) 16:07, 28 January 2011 (UTC)

My edits

The article is rotten from the very beginning, while you are discussing some secondary trivia. If you are going to revert me again, be sure to prove that the nonsense I removed from the very visible introductory part will not reappear, without proof that it was correct and I am a moron. Loggerjack (talk) 17:12, 25 January 2011 (UTC)

What you wrote seems to confuse the notion of an algorithm with the execution of the algorithm. The execution may turn out to be an infinite sequence of (executed) steps, but the algorithm itself (its number of lines of code, for example) is always finite in extent. A "bug" is not an algorithm; it is ineffective code. For example, Knuth 1973:2 (Volume I) states: "5) Effectiveness. An algorithm is also generally expected to be effective. This means that all of the operations to be performed in the algorithm must be sufficiently basic that they can in principle be done exactly and in a finite length of time by a man using pencil and paper." This is certainly not a description of "buggy code" caught in an infinite loop. And to my knowledge there is no such thing as an infinite instruction set; can you cite a source for that statement? BillWvbailey (talk) 20:47, 25 January 2011 (UTC)
(1) You are right. I reverted myself. However, in my defense I must say the old intro already incorporated this confusion: "sequence of steps" is actually about execution. You may see this right from the top picture: it is not a sequence, but a tree. I am glad that you brought up this issue. Please review the article to clarify this. Loggerjack (talk) 22:45, 25 January 2011 (UTC)
(2) Disagreed with your "A 'bug' is not an algorithm, it is ineffective code." A "bug" may occur as early as the design, i.e., in the algorithm itself, not only in the implementation. I do agree that I gave a bad/confusing wikilink to "software bug". Loggerjack (talk) 22:45, 25 January 2011 (UTC)
(3) "Effective": Disagreed about the finite number of steps. The major confusion is that in your definition is supposed to produce some "end result". This is not always so. I gave an example of traffic light. (A more "folklorish" example would be elevator controller [what??? no wikipedia article???]). There is no ultimate "end" result, and they are supposed to work "forever", although the whole algorithm for these controllers may be arguably split into a sequence of "end results".
(4) I see a deep logical issue in bridging the gap between the phrases "the instructions do not necessarily constitute a finite sequence" and "proceeds through a well-defined series of successive states, eventually terminating in a final ending state." 22:45, 25 January 2011 (UTC)
(5) I may continue this list, but I urge you to re-read the fundamental parts of the text carefully. Loggerjack (talk) 22:45, 25 January 2011 (UTC)
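To make the non-terminating controller of point (3) concrete, here is a hypothetical minimal sketch in Python (the states and durations are invented): a control "algorithm" with a finite description but no final state, where each completed cycle could be read as one of the "end results" mentioned above.

    import itertools

    STATES = ["green", "yellow", "red"]
    DURATION = {"green": 30, "yellow": 3, "red": 33}   # seconds, made up

    def traffic_controller():
        # a finite instruction table driving an endless execution
        for state in itertools.cycle(STATES):
            yield state, DURATION[state]

    lights = traffic_controller()
    for _ in range(4):
        print(next(lights))   # ('green', 30), ('yellow', 3), ('red', 33), ('green', 30)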
Take a look at Algorithm characterizations. The generalized notion of "algorithm" is like the notion of pornography -- you know what it is when you see it, but you can't quite seem to define it. So you make a list of what more or less is an example of it or what you think it is; somewhere in the composite is the truth. See in particular Knuth's full characterization, and Rogers 1967, Stone's, Berlinski's . . . they're all interesting. And Gurevich believes that the machine+code is actually "the algorithm".Bill Wvbailey (talk) 23:02, 25 January 2011 (UTC)
This makes sense. And I am sure that someone had already written something to this end. Why don't you write this in the introduction? It would be much better than some random attempts to define what is not definable. Loggerjack (talk) 23:10, 25 January 2011 (UTC)

I stuck a stake in the ground. I hope that by hardening the definition, confusion will abate. The lead paragraphs are now full of footnotes (maybe not good style, but so be it) because there is and will be dissent, and at least we now can see the sources. The very last paragraph I don't like, but it has a source; an evolution of the notion of "algorithm" to include analog computation is probably a bad thing; historically algorithms have been, by their very nature, discrete, i.e. symbolic and mappable to integers. Also, the colored drawing was not (but this is arguable, for sure) really describing an algorithm; rather it was describing a process or method -- Knuth notes (page 2), after introducing Euclid's algorithm and discussing it, that "The modern meaning for algorithm is quite similar to that of recipe, process, method, technique, procedure, routine, except that the word 'algorithm' connotes something just a little different. Besides merely being a finite set of rules which gives a sequence of operations for solving a specific type of problem, an algorithm has five important features: 1) Finiteness . . . 2) Definiteness . . . 3) Input . . . 4) Output . . . 5) Effectiveness . . .". Under "Input" he states "quantities", which usually (although a cup of sugar is a quantity) implies numbers. Rogers is absolutely adamant -- symbolic input --> algorithm --> symbolic output -- and he limits himself to numerical functions, stating "this limitation (to numerical functions) results in no loss of generality". Bill Wvbailey (talk) 16:35, 26 January 2011 (UTC)

User Optimering, Metaheuristics, and Template:Optimization algorithms

Please review the promotion of (meta)heuristics by editor Optimering over the last year, including his edit at Template:Optimization algorithms, where he removed approximation algorithm from the section on combinatorial optimization and added ant colony optimization. He also removed the Gauss-Newton and Levenberg-Marquardt articles from the gradient-related section; these are the most used methods in all of optimization, according to Lemaréchal, Gilbert, Bonnans, and Sagazstibal (sic) and science citation index counts.

"Optimering" has already had one "heads-up" at the COI noticeboard. He has been warned about OR and self promotion (at risk of blocking) at his self-titled discussion "Block_threat_to_expert_contributor" at the administrators' noticeboard. and has boasted about his own standing in the world of metaheurstics at AFD.

I'll post this also at the CS page. Thanks! Sincerely,  Kiefer.Wolfowitz  (Discussion) 15:46, 19 February 2011 (UTC)

" . . . or achieving a desired result"

I reverted this because of my stance around "stiffening" (restricting) the definition of "algorithm". The notion of "function" implies a "desired result" (the output). And an algorithm implies the way to get the function: "step-by-step", "terminating", "computing/calculating", with no other possibility allowed. For example, a "cookbook", or a specification, is not an algorithm. In general an algorithm must be translatable into the "language of", and computable by, a target "computer/computor", that is, when reduced to its lowest level, a Turing-machine-equivalent mechanism or a human acting like a robot. (Since I'm far away from my books and isolated wrt the internet, my response to any dialog here will be sporadic at best.) Bill Wvbailey (talk) 02:05, 13 March 2011 (UTC)

Enhancing the lead-paragraph

Tijfo098 replaced the drive-by fruiting tag with a more reasonable one, i.e. that the lead paragraph should include a more complete summary of the article than it does now.

So what do we want to say in brief about "algorithm(s)"? Why are they important? Why do we care about the notion? This we can say about algorithms:

  1. That they are common in daily life. They are "executed" by both humans and mechanisms (computers, automata),
  2. when executed by humans, they are most commonly to be found in the procedures of basic arithmetic, e.g. the customary methods of writing digits under one another, the carry and borrow of addition and subtraction and the shifts with carry or borrow of multiplication and division (see the sketch after this list); however, as the Euclidean algorithm demonstrates, these algorithms may be complex, and/or they may involve sophisticated mathematical operations (e.g. finding the roots of a quadratic equation, finding a derivative in the calculus),
  3. when executed by machines, nowadays they are most commonly instantiated as core elements of "programs" that are located in the memories of computers, but also . . .
  4. they also appear in simple automata, e.g. state-machine-driven mechanisms: mechanisms that proceed through the equivalent of an instruction table together with testing of inputs and "jumping" between instructions, for example simple solid-state traffic-light controllers, or mechanically driven stepping switches for washing machines, dishwashers, etc.
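As a concrete instance of item 2 (the carry of addition), a minimal Python sketch; the digit-list representation and the function name are illustrative assumptions:

    def add_by_carries(x_digits, y_digits):
        """Grade-school addition: digits least-significant first,
        so 47 is [7, 4]; the carry propagates leftward as when
        writing digits under one another by hand."""
        result, carry = [], 0
        for i in range(max(len(x_digits), len(y_digits))):
            a = x_digits[i] if i < len(x_digits) else 0
            b = y_digits[i] if i < len(y_digits) else 0
            carry, digit = divmod(a + b + carry, 10)
            result.append(digit)
        if carry:
            result.append(carry)
        return result

    assert add_by_carries([7, 4], [5, 8]) == [2, 3, 1]   # 47 + 85 = 132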

What else? Bill Wvbailey (talk) 20:06, 7 April 2011 (UTC)

---

The following occurred to me -- answer the Five Ws, i.e. "a checklist of six questions, each of which comprises an interrogative word". The list below is derived from Boethius.

As can be seen below, this turns out to be a non-trivial exercise:

1 What?
[derived from various sources: Rogers, Knuth, Stone, etc.]: An algorithm is an effective step-by-step method for calculating a function (i.e. given an input, perhaps null, determining the output mapped to that input),
2 How?
[Derived from Stone, Turing's analysis, etc.]: A mechanism (human or mechanical) proceeds through the algorithm -- the finite list of instructions that specifies, as determined by the present instruction and the input condition(s) (in turn derived from external input, a memory, or both), the next instruction plus the "output" associated with the current instruction. (A toy instruction-table sketch follows this list.)
3 Why?
Why an algorithm (with input)? [Answered by the Boolos-Burgess quote, by Stone, by Chaitin, etc.]: To generalize, to compact. Rather than specify a "function" in tabular form, i.e. for every input write down its output (perhaps ad infinitum), we use a much-abbreviated step-by-step effective method to achieve the same result, together with an effective calculating/computing "agent".
Why an algorithm (with null input)? [There may be a reference/quote from Brouwer, believe it or not]: So that, without the onerous task of listing the decimal digits perhaps ad infinitum, we can find/define (if anyone wants to know them, to any precision they specify) the value(s) of certain special numbers that happen to be calculable with simple arithmetic, specifically the transcendental numbers pi and e, plus a few more.
4 Where?
Where do algorithms appear? [Derived from Turing's analysis]: As instructions on paper (or equivalent). In human memory (i.e. learned). In computer memory. In the state tables (or the equivalent, i.e. built-in combinatorial logic) of automata or (arguably) biological systems (e.g. reflexes).
5 When?
When are algorithms used? [Derived from Kleene 1952, his analysis of 3-valued logic, believe it or not]: In those circumstances when the inputs are not yet known, but an effective method must be available to compute the outputs when the inputs arrive.
When developed? [Many sources] From the ancients to modern day, see history
When was the notion (quasi-)formalized? [Many sources] See history: 1920-1930, necessary for an answer to the Entscheidungsproblem
6 With what?
Humans do algorithms with what? [derived from Turing's analysis]: A human with reading/writing ability etc. + pencil and paper (erasable and writable) + written instructions; OR a human with reading/writing ability etc. + a memory of learned abilities (e.g. the ability to do long division) + memory serving as a "scratch-pad" for temporary results.
Automata do algorithms with what? The tabular instructions (next-instruction + output) are in physical configurations stored electronically, electrically, mechanically, pneumatically, etc. + a modifiable "state memory" that stores the next instruction.
Computers do algorithms with what? The tabular instructions are located in physical configurations that are read-only (usually electro-physical/chemical) or read-write (usually electronic, sometimes electro-physical, e.g. electron storage) + the current state is stored in a read-write state memory (usually electronic) + a read/write storage (usually electronic) is available (and behaves like a scratch-pad).
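A toy illustration of the "How?" and "With what?" answers above: an instruction table mapping (current instruction, input condition) to (output, next instruction), stepped through by a trivial agent. The table contents are invented; a minimal Python sketch:

    # Table: (instruction, input symbol) -> (output, next instruction).
    # This toy machine tracks whether it has seen an even or odd
    # number of 'a' symbols -- the point is only the table-driven form.
    TABLE = {
        ("even", "a"): (None, "odd"),
        ("odd",  "a"): (None, "even"),
    }

    def run(table, start, inputs):
        state = start                    # the "state memory"
        for symbol in inputs:            # input conditions as they arrive
            _output, state = table[(state, symbol)]
        return state

    assert run(TABLE, "even", "aaaa") == "even"
    assert run(TABLE, "even", "aaa") == "odd"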

Wvbailey (talk) 22:53, 7 April 2011 (UTC) Bill

I'm not sure such a major overhaul of the lead is called for (and much emphasis on algorithms for the execution of everyday tasks by humans and non-computational machines does not seem to correspond to the actual importance of the notion), but I do have problems with the current lead. For the sake of clarity I'll start by citing it:
In mathematics and computer science, an algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. Algorithms are used for calculation, data processing, and automated reasoning.
Starting from an initial state and initial input (perhaps null), the instructions describe a computation that, when executed, will proceed through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.
A partial formalization of the concept began with attempts to solve the Entscheidungsproblem (the "decision problem") posed by David Hilbert in 1928. Subsequent formalizations were framed as attempts to define "effective calculability" or "effective method"; those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's "Formulation 1" of 1936, and Alan Turing's Turing machines of 1936–7 and 1939.
I have not much difficulty with the last paragraph, except that it does not belong to the lead, but to the formalization section. My main difficulty with the other two paragraphs is that they do not give any clear description, and are contradictory in some respects. Indeed saying "the transition from one state to the next is not necessarily deterministic" immediately contradicts "well-defined" in the previous sentence, unless I am to read it as: "proceed through a finite but possibly unspecified sequence of successive well-defined states", which I doubt. Also calling the random element in randomized algorithms "inputs" goes directly against the intention (if they are just inputs, why consider them "random"; few would say that a sorting algorithm incorporates random input, although from the point of view of the algorithm the input could be anything). It is stated somewhat implicitly that all inputs are available initially, and that outputs are produced at the end, which is not applicable to, say, an elevator control algorithm (but then it also fails the termination criterion, and is somewhat hard to view as "calculating a function", so this is maybe intentional).
I think these problems are inevitable, in the sense that people do not agree on what is to be called an algorithm, for understandable reasons. Those who take algorithms as subject matter have an interest in narrowing the notion down to something manageable; the citation provided to justify the "calculating a function" point of view clearly falls under this case (and it is somewhat at odds with the fact that many algorithms produce output "on the fly" while running, without recording it themselves, making it hard to view the output as a single composite value). Others have an interest in widening the definition so as to cover a notion they take particular interest in (the current formulation has clearly undergone some pressure from "randomized algorithm" folks; the "non-deterministic algorithm" point of view is less clearly served, and there are certainly numerous other points of view excluded by the narrowness of the current formulation). To solve this dilemma, I propose to stick initially to a simple, fairly narrow description that has the benefit of clarity, and then explicitly indicate that the notion can be widened in various ways to cover variants of practical importance, giving some examples. (Just as an indication, Knuth's description would seem a reasonable basic definition, with inputs at the start, outputs at the end, deterministic steps, and guaranteed termination.) I think this is preferable to trying to give a formulation that incorporates every possible variation from the start, because that is bound to give a very vaguely defined notion that serves nobody, and certainly not the reader who tries to get some idea of what "algorithm" means. Marc van Leeuwen (talk) 11:13, 19 April 2011 (UTC)

Marc, I agree that the third paragraph could, with no harm to the article, be moved into the formalization section.

“Pressure from the randomized algorithm folks": yes, this is why the part about non-determinism is still there -- so as not to roil the waters too much were it to be removed. I don't like it, for the simple reason that it dilutes the notion of an algorithm being deterministic. I agree it should be removed from the lead. (It needs a footnote/source, and it could be expanded into an entry at Algorithm characterizations.) This leaves just:

In mathematics and computer science, an algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. Algorithms are used for calculation, data processing, and automated reasoning.
Starting from an initial state and initial input (perhaps null), the instructions describe a computation that, when executed, will proceed through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state.

Notion of "computing a function" : I think this is why an unregistered editor hung the "driveby banner" over the article. The "function" notion comes from Rogers' book on recursion. While it (perhaps) represents a narrow definition he states that "For our purposes . . . this limitation (to numerical functions) results in no loss of generality" (Rogers 1987:1). True, his stated purpose is to characterize recursive function. But when I read it I thought: "Wow, that really nails what an algorithm does/is".

But because it is so precise, a lot is left unsaid. Plus it forces a "newbie" reader to confront the murky notion of "function". It also forces, via the notion of "function", the notion of "mapping" or "transduction", i.e. converting the "real-world input" (e.g. spoken words, typed words, whatever . . . such as "YES", "NO", "DO AGAIN", "THIRTY_THREE", "ELEVEN") into positive-integer equivalents suitable for the agent that will "execute" the algorithm (list of instructions). And it forces the notion of "instantiation" of an algorithm as a finite, discrete, numbered list of instructions.

In my estimation the Boolos-Burgess quote in the informal-definition section likewise nails the reason for algorithms -- i.e. to compact or reduce into a finite "structure" a number (such as pi) or a procedure, rather than have to list every single value (perhaps an infinite number of them) in a lookup table. I'd say the classical example is an algorithm that allows us to compute the decimal digits of pi rather than list all the digits, ad infinitum. In fact, Rogers 1987:4 lists pi as an example of "algorithm"; here's a paraphrase of his 4 examples (a minimal sketch of three of them follows the list):

a. the xth prime number
b. the greatest common divisor of x and y (The Euclidean algorithm serves here)
c. the integer ≤ 9 whose Arabic numeral occurs as the xth digit in the decimal expansion of pi
d. x + y. "Such common algorithms are the substance of elementary school arithmetic."
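A hedged Python sketch of examples a, b, and d (example c, the xth digit of pi, needs more machinery than fits here; the function names and the trial-division primality test are assumptions for illustration):

    def xth_prime(x):                    # example a
        count, n = 0, 1
        while count < x:
            n += 1
            if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                count += 1
        return n

    def gcd(x, y):                       # example b: the Euclidean algorithm
        while y:
            x, y = y, x % y
        return x

    def add(x, y):                       # example d: school arithmetic
        return x + y

    assert xth_prime(4) == 7 and gcd(49, 21) == 7 and add(2, 2) == 4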

Here are a couple of attempts to put the notion of an algorithm into a "function box". Whether such drawings help or hinder probably depends on the individual reader. I tested the simpler version (below) on a non-mathematician and encountered the problem of what an integer is. But once I simplified the notion (i.e. by listing them) they got the idea: "counting numbers fall in, the gcd falls out". The more complicated drawing is much closer to the concept of a "computer" with its "table of instructions" and its "effective agent". I don't think the issue of the numbers appearing "all at once at the start" needs to be brought up in the lead. E.g. in those circumstances an effective algorithm would produce its output as the input becomes available (it could distinguish between null input and 0-as-valid-input, for instance). However, later in the article this should be discussed (if it isn't already). Bill Wvbailey (talk) 20:53, 19 April 2011 (UTC)

An algorithm is shown here inside a function-box. The algorithm includes a list of instructions: finite, discrete, and numbered (or otherwise individually identifiable). But the notion also requires the instructions to be effective, i.e. instantiated in a form suitable for its "agent" (human or mechanical). This "agent" will interact with the instructions and the computation as it proceeds. When provided with effective "input" (e.g. symbols or mechanical events), the device that contains the algorithm and effective agent will (1) map the input into positive integers, (2) proceed step by step to compute the correct "output" and perhaps (3) map it into a symbol or a mechanical event. Usually an algorithm requires a proof of correctness (cf Rogers 1987:1-5).
As shown in this example, "Euclid's algorithm" (appearing inside a so-called "function-box") can be thought of as a "procedure for calculating a function (with respect to some chosen notation for integers)" (Rogers 1987:1). This procedure must appear as a finite, discrete list of individually-identifiable instructions together with a computational "agent" (human or mechanical) that will be effective (i.e. capable) with respect to the instructions, the input and the output (cf Rogers 1987:1-5).
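A rough Python rendering of the function-box reading described in these captions -- symbolic input mapped to positive integers, a step-by-step computation by the "agent", and the result mapped back to a symbol. The word table is invented for illustration (echoing the "THIRTY_THREE"/"ELEVEN" examples above), and the gcd body stands in for the instruction list:

    WORDS = {"SEVEN": 7, "ELEVEN": 11, "TWENTY_ONE": 21,
             "THIRTY_THREE": 33, "FORTY_NINE": 49}
    NUMERALS = {v: k for k, v in WORDS.items()}

    def function_box(word_a, word_b):
        a, b = WORDS[word_a], WORDS[word_b]   # (1) map input to positive integers
        while b:                              # (2) compute step by step (Euclid)
            a, b = b, a % b
        return NUMERALS[a]                    # (3) map output back to a symbol

    assert function_box("FORTY_NINE", "TWENTY_ONE") == "SEVEN"
    assert function_box("THIRTY_THREE", "ELEVEN") == "ELEVEN"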

Finite number of steps?

I never understood why algorithms are supposed to be limited to a finite number of steps. An algorithm to calculate the square root of a number requires an infinite number of steps. But "infinite" in mathematics does not mean a prohibitively large number; it means only that the result can be achieved with arbitrary (although never complete) precision. Rbakels (talk) 09:18, 6 December 2011 (UTC)

Well, the algorithm on a computer would be to "calculate the square root to within 1 ULP" rather than just "calculate the square root". --Salix (talk): 10:40, 6 December 2011 (UTC)
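In the same spirit, a minimal Python sketch (a tolerance-based variant, not the 1-ULP version Salix mentions) showing how fixing the precision in advance makes the step count finite:

    import math

    def sqrt_to_tolerance(n, tol=1e-12):
        """Newton's method for the square root (assumes n >= 0);
        stops -- after finitely many steps -- once successive
        iterates agree to within tol."""
        x = n if n >= 1 else 1.0
        while True:
            nxt = 0.5 * (x + n / x)
            if abs(nxt - x) <= tol:
                return nxt
            x = nxt

    assert abs(sqrt_to_tolerance(2.0) - math.sqrt(2.0)) < 1e-9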

About the etymology section

Usually this section precedes all others. Why then is it postponed until section 9, where it makes no sense at all? Is it maybe because we have a hard time accepting that such a crucial concept, without which we wouldn't have had computers, actually goes back to and bears the name of a brilliant Muslim, whom we would prefer to see as a terrorist? Shame on you! — Preceding unsigned comment added by 87.212.38.229 (talk) 16:58, 29 December 2011 (UTC)

I tried to do something about this, but apparently user Wvbailey is convinced that the authors he mentions (is it Sonya Kleene, Stephen Kleene, who? no idea who is meant by "Rogers") are somehow definitive. I don't get into edit warring or disputes with such users, so others will have to pick up from here. Maybe the tagging is the best that can be done under these circumstances. Lycurgus (talk) 02:17, 17 January 2012 (UTC)
Also, I would take the approach of defending Arabic or Iraqi culture in this case rather than Islam/Muslims; they're not the same. 72.228.177.92 (talk) 02:43, 17 January 2012 (UTC)

---

The etymology has been an ongoing issue here. Clearly anonymous has a (Muslim, Arab or Persian) ax to grind and is attempting to inject their religious faith or jingoistic intent into a difficult article that needs (quite clearly, as demonstrated in anonymous's rant above) extremely tight sourcing. This problem of edit warring over their favorite son al-Khwārizmī has been ongoing between Iranians and Moslems, Shiites and Sunnis: the Iranian Shiites want to claim him as an Iranian (he was a Persian), while the Arabs who swept through his region of Persia and converted his family by the sword from Zoroastrianism to Islam want to claim him as a Moslem and/or Arab. What his faith and his nationality were is absolutely irrelevant to the definition of "algorithm". The edit that you redacted and I reverted is flat-out wrong. If anonymous can produce sourcing, then we can entertain a dialog. Or better yet, anonymous can add the definition and sourcing to Algorithm characterizations. But first I suggest that anonymous actually read the article, the footnotes, and consult the relevant attached sources. And by the way, it's Stephen Kleene 1952, Hartley Rogers 1967, and add Donald Knuth (various dates); see the sourcing references. wvbaileyWvbailey (talk) 02:48, 17 January 2012 (UTC)
tldr; I should think Knuth or somebody would be a name to drop if that was the approach being taken. 72.228.177.92 (talk) 03:26, 17 January 2012 (UTC)
I'd also like a bit of clarity on the issue. While the article implies that the word algorithm is taken from Al Kwarizmi's name, the concept of an algorithm had been around for several centuries already. If that is true, then that perhaps needs to be made more explicit. Kmasters0 (talk) 08:37, 22 February 2012 (UTC)
Can you source this, or be more explicit (who, what, when, where, how) about someone actually intentionally creating a generalization of the notion "step-by-step procedure for how to perform calculations" (i.e. the concept)? If there's something that can be sourced and is relevant, we should add it. It would be an interesting addition. (The history indicates it was a long process that culminated only in the 1930s, although there were major strides (not mentioned in the article, I believe) in the mid-to-late 1880s around a formalization of the notion of recursion; in fact the more I think about it, the more "key" this seems. I bumped into this accidentally not so long ago, but I can't remember who was responsible for formalizing the notion.) Bill Wvbailey (talk) 15:05, 22 February 2012 (UTC)
RE recursion: Bill's note to Bill: See Dedekind 1887. But what was I reading that sent me there? Cf Dedekind's definition 71 of the natural numbers, then his theorem 80 (complete induction), and mathematical induction §124-125. Also Berlinski (unfortunately he does not give a history of recursion): "Recursion is an example of a mechanical process. The steps are broken down so simply that no thought is involved in carrying them out. . .." / "And the importance of this?" / "It gives precise meaning to the intuitive idea of effective calculability." / "Meaning?" / [the dialog goes in a bit of a circle] / "It was clear to me," the cardinal finally said, "that recursion would be a subject to my taste" (pages 134-135).
RE recursion, note to Bill part 2: Here's Dedekind 1887 as translated by Beman 1901: ". . . therefore the definition by induction (or recursion) is determinate and consistent (126)" (page 33). Bill Wvbailey (talk) 23:40, 24 February 2012 (UTC)
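Definition by recursion in Dedekind's sense can be made concrete with a toy example (mine, not Dedekind's or Berlinski's): addition on the natural numbers defined from the successor operation alone, in Python:

    def add_by_recursion(x, y):
        """x + y defined by recursion on y: the base case x + 0 = x,
        and the recursion step x + succ(y) = succ(x + y)."""
        return x if y == 0 else add_by_recursion(x, y - 1) + 1

    assert add_by_recursion(3, 4) == 7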
No, I can't source it, which is why there is confusion. My argument is that the article implies this. It does this by introducing Euclid's algorithm as a matter of fact. Because it doesn't say that the process did not exist, this would imply that the concept of an algorithm had existed at (or even before) Euclid's time. Then the article goes on, much later, to explain that the word for the process comes from Kwarizmi's name. If that is not true, then I think it should be made more explicit. (BTW, while I don't support anonymous's rant, I also found it strange that the etymology of the word was placed so late in the article. Perhaps this can be moved, but by someone who is more familiar with Wikipedia's editing principles than I.) Kmasters0 (talk) 10:01, 24 February 2012 (UTC)
At the time anonymous raised the complaint, the etymology of the word was not mentioned in the lead section at all. Now it is in the first sentence. Since then it seems that the etymology discussion has cooled off a bit. Isheden (talk) 15:24, 24 February 2012 (UTC)

---

  • RE Euclidean "algorithm": Good point about the usage "Euclidean algorithm". I understand your concern. I think the approach to take (see the quotes below) will be to change the wording under the drawing, and perhaps in the body too. Euclid (at least in the Heath 1908 translation) begins with a "Proposition" which he then proves. Heath in his commentary calls it a "method" and "process" (cf footnote page 67) and refers to Nicomachus's rule:
"Here we have the exact method of finding the gratest common measure given in the text-books of algebra . . .. The process of finding the greatest common measure is simply shown thus: [demo goes here, is confusing]. ¶ Nichomachus gives us the same rule (though withut proving it) . . ." (pages 66-67 from Euclid's Elements Book VII appearing Hawking 2005).
Knuth starts with a little history re al Kwarizmi, and then jumps right in with the Euclidean "algorithm" as his first example. Knuth notes that the word did not appear in Webster's New World Dictionary until 1957. He also cites the Oxford English Dictionary about the etymology being confused with arithmetic. He calls it a process once and thereafter "algorithm":
"By 1950, thw word algorithm was most frequenty associated with "Euclid's algorithm", a process for finding [etc]" (Vol 1 Knuth 1972:2)
Berlinski starts this way (first para in the Preface of the book):
"More than sixty years ago, mathematical logicians, by defining precisely the concept of an algorithm, gave content to the ancient human idea of an effective calculation." (Berlinski 2000:xi)
"The idea of algorithm had been resident in the consciousness of the world's mathematicians at least since the seventeenth century; and now, in the third decade of the twentieth century, an idea lacking precise explication was endowed with four different definitions . . . the four quite different definitions, it is worthwhile to recall, were provided by Goedel, Church, Turing, and Post." (Berlinski 2000:20)
The definition from my Merriam-Webster's Ninth New Collegiate Dictionary (1990) gives a first occurrence ca. 1894: "a procedure for solving a mathematical problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves repetition of an operation; broadly: a step-by-step procedure for solving a problem or accomplishing some end."
  • RE the etymology section: it has moved around any number of times. As I note above, it has been a "bone of contention" for a long time, almost to the point of having to place the article under a "review" process. A prominent position near the top seems to inflame the passions of those who then read no further and engage in an edit war. Since it now appears far down in the article, I've noticed the flaring of passions has been greatly reduced. Actually, even the mention of his name in the lead paragraph has been a cause of inflammation. Bill Wvbailey (talk) 16:35, 24 February 2012 (UTC)

Etymology and True Word Origins

It is obvious the editor of this article has control issues.

It is a fact that al-Khwārizmī studied Greek source material extensively to make his unique contribution to the world of mathematics.

It is a fact that most of the word algorithm, just as the word logarithm, comes from a Greek language origin.

To state that two syllables of a word come from a certain language (in this case Greek and NOT Arabic) and then refuse to give credit shows true ignorance, ethnic bias and revisionism.


It is in fact important, when beginning to understand any concept, to start with where the notion comes from and what it means; this is fundamental.

Since "al" is a reference to the name Khwārizmī and not part of the name or nomen itself, then there is no evidence that this is related to the etymology of the word algorithm and not irrelevant to the issue. The word 'the' or 'of' have no contextual or descriptive relationship to the entire word, al-gorithm.

It is widely accepted that logarithm is short for the Greek logos-arithmos, or literally translated "why-arithmetic", and most evidence supports the idea that algorithm is a further development of the same arithmetic concept, allo-logos-arithmos, or translated from the Greek: "another-why-arithmetic".

al-Khwārizmī developed the algorithm from reference to Greek math texts, not in a vacuum, but by study of preexisting knowledge, and he innovated from that point. This has nothing to do with opinions; it is "the story", historical fact. — Preceding unsigned comment added by 69.183.30.237 (talk) 06:41, 8 May 2012 (UTC)

You are certainly full of opinions (or as Bertrand Russell would call them -- "beliefs"), but you have not backed them up with even a single source. Wikipedia requires sources, not beliefs. If you have credible sources, in particular research on the etymology of this particular word algorithm, we would like to see them. Bill Wvbailey (talk) 14:40, 8 May 2012 (UTC)

There are sources in the article itself, an article which has contradictions. Some related articles that conflict can also be found. The use of language fragments or combined words means that it is already proven. Arabic and Greek merging of words is common throughout history; there is a shared influence from Egypt and the Phoenician root. The close trading relationship between these cultures translated into intellectual exchange as well. The alphabet was created from a Phoenician arithmetic notation for items of trade, i.e. ox-head or alpha, etc. The Greeks combined these symbols with their own glyphic scripts, and writing was also adopted more widely in the Middle East. Arabic followed a similar genesis preceding written script.

The article itself is not consistent and was revised to remove pertinent information. I don't agree that any "belief" system enters into the debate. History at one point in time became an instrument to convey knowledge, as imperfect as that can be. Politicizing something that can be clearly pointed out does not accomplish anything. It is not belief that leads me to think this; it is disdain for a new culture that has disregard for the past. — Preceding unsigned comment added by 69.182.147.217 (talk) 04:36, 11 May 2012 (UTC)

Etymology: algorithm. Sources? BillWvbailey (talk) 23:00, 11 May 2012 (UTC)