Wikipedia:Reference desk/Mathematics
Welcome to the mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
November 12
Riemann and Spivak
How do you pronounce these names (IPA please; since you're mathematicians you might not be familiar with it, here's a handy chart WP:IPA)? I know the Riemann article has a pronunciation, but it is the German pronunciation; I want to know how it would be pronounced in a university/classroom setting. As for Spivak, is it ['spiː.væk], or ['spiː.vak] or something else? —Preceding unsigned comment added by 24.92.78.167 (talk) 01:10, 12 November 2010 (UTC)
- I pronounce Riemann [ˈɹiːmɑːn], I think. I'm not too sure about the differences between those various A symbols in IPA. It's the same vowel I use in "father." (I speak with a Nebraska accent, though, which seems to have fewer differentiated vowels than some accents; for example, "cat" and "can't" have the same vowel for me, as do "caught" and "cot." But it's definitely not an [æ] or an [ə].) —Bkell (talk) 04:21, 12 November 2010 (UTC)
- You might try the language reference desk. —Anonymous DissidentTalk 09:44, 12 November 2010 (UTC)
- I say [ˈriːmən] and [ˈspɪvæk]. Algebraist 10:33, 12 November 2010 (UTC)
I've always pronounced Riemann with an "ah" sound in the unstressed second syllable, rather than a schwa, and the first syllable rhyming with "be" as in "To be or not to be....". Pretty much the German pronunciation except that in German one might pronounce the initial "R" differently, and that difference is not essential to either language. Michael Hardy (talk) 16:54, 12 November 2010 (UTC)
- I agree with Michael Hardy for Riemann, which is [ˈɹiːmaːn] as I understand IPA. I say [ˈspɪvæk] as Algebraist. I have not heard much variation on these from native speakers of English specializing in math at American universities. SemanticMantis (talk) 17:02, 12 November 2010 (UTC)
Mortality Rates India
Where can I obtain the mortality rates (or table) for India? —Preceding unsigned comment added by Aneelr (talk • contribs) 03:22, 12 November 2010 (UTC)
- http://www.unicef.org/infobycountry/india_statistics.html Ginger Conspiracy (talk) 23:47, 16 November 2010 (UTC)
Godel
If we consider Physics to be an axiomized system, what would the implications of Godel's incompleteness theorem be? 76.68.247.201 (talk) 06:14, 12 November 2010 (UTC)
- There's a story about Lincoln. Supposedly he once posed the question, "If we consider a tail to be a leg, how many legs has a dog?". The answer was "four", because considering a tail to be a leg does not make it one.
- Physics is not an axiomatic system. --Trovatore (talk) 06:16, 12 November 2010 (UTC)
- Didn't von Neumann make quantum mechanics axiomatic? 76.68.247.201 (talk) 06:52, 12 November 2010 (UTC)
- I don't know just what he axiomatized. But I'm quite certain it wasn't all of physics. I don't think they even knew about, say, gluons when von Neumann was alive. --Trovatore (talk) 10:10, 12 November 2010 (UTC)
- I think it is a good question, (meaning that I do not know the answer). Physics can be axiomized. A theory of everything is supposedly a set of axioms for all of physics. Newton's laws constitute axioms for celestial mechanics. Euclid's postulates constitute axioms for geometry, considered a physical theory of space. Bo Jacoby (talk) 09:39, 12 November 2010 (UTC).
- I agree with Trovatore. Physics is an empirical discipline. No matter how elegant the underlying mathematics / axioms are, if the theory predicts one thing different from experiments, we must discard the theory. Money is tight (talk) 10:07, 12 November 2010 (UTC)
- I also agree with Trovatore. You can develop an axiomatic model of some part of the real world, but that is not the same as axiomatising physics, which is an empirical science. When observations or experimental results conflict with the deductions from your model, then it is time to find a new model. Newtonian gravity is a model that works pretty well, but does not completely account for the motion of Mercury; gravitational lenses show that space is not exactly Euclidean etc. Gandalf61 (talk) 10:59, 12 November 2010 (UTC)
- Now, it should be said that some pretty respectable people (e.g. Hawking) have made the argument that the Goedel theorems exclude a theory of everything. I think it's a pretty silly claim myself, or rather, the sort of "theory of everything" that it excludes is the sort that no one should ever have thought might exist in the first place. For my money, a theory counts as a "theory of everything" in a physics context if it completely describes all elementary physical interactions. The possibility that you might be able to encode, in terms of those interactions, a mathematical question to which the theory might not give you an answer, is totally irrelevant.
- To put it another way, if the theory describes all physical reality once given an oracle for mathematical truth, that's more than enough to make it a physics "theory of everything". --Trovatore (talk) 10:16, 12 November 2010 (UTC)
Obviously Hawking does not agree with Trovatore that Physics is not an axiomatic system. Bo Jacoby (talk) 12:31, 12 November 2010 (UTC).
- What Hawking says in his lecture Gödel and the end of physics is:
"Up to now, most people have implicitly assumed that there is an ultimate theory that we will eventually discover. Indeed, I myself have suggested we might find it quite soon. However, M-theory has made me wonder if this is true. Maybe it is not possible to formulate the theory of the universe in a finite number of statements. This is very reminiscent of Godel's theorem. This says that any finite system of axioms is not sufficient to prove every result in mathematics."
- Gandalf61 (talk) 13:17, 12 November 2010 (UTC)
- Not to be an ass, but reality seems to have no problem with Godel's theorems (I am being metaphorical, although there is merit to the statement about the universe computing...). More to the point, though, theories in physics are not like theories in mathematics. Suppose we want to know if "x's can ever become y's"; if our theory can't answer this, then we observe nature: if we find x's becoming y's, we add it to our theory; if we don't see this, then either some x will become a y at some future point, or not. If one will, eventually, then the theory with "x's can become y's, sometimes" will be stronger, thus making our original not a theory of everything. On the other hand, if it never happens, then our theory isn't a theory of everything, since adding "no x's become y's" is stronger. The real question is whether the list of all physically relevant such statements is infinite, or not. Finally, on an aside, if I have a theory in which X is undecidable, but X is not a physically verifiable part of the theory, does this mean anything physically? And, are we talking about theories with physical axioms, or mathematical models of them (which to me seem to be more akin to the inner (hidden) workings of a computer program, not mathematical theories, since they are built to calculate results)? —Preceding unsigned comment added by 71.61.7.220 (talk) 19:20, 12 November 2010 (UTC)
Okay, I'm not sure people are understanding my question, although I could just be misunderstanding your posts. First, I don't see why you couldn't axiomatize physics. Obviously physics is based on empirical results, but these empirical results would ultimately form the basis of the axioms which govern the mathematics of physics, no? At any rate, as I mentioned earlier, von Neumann was able to make axioms which could completely describe quantum mechanics, and I don't see why it would be unreasonable to suppose that quantum field theory + quantum gravity (or whatever is the eventual Theory of Everything) could be treated similarly.
So if physics is eventually axiomatized, what would it mean to say that not every result can be proved by the axioms? Would that mean that for such results we could defer to experiment, or would it be even outside the range of experimental testability? 76.68.247.201 (talk) 20:50, 12 November 2010 (UTC)
- Put it this way. Take any collection of silicon atoms, at least four, that can be divided into two pieces with an equal number of atoms. Can it necessarily be divided into two groups, neither of which can be arranged into a rectangular grid (no matter the spacing between the grid points, to avoid questions about gravitation and such) in which both sides have at least two atoms?
- This is the Goldbach conjecture (which is true, of course, but no one knows a proof). If you think it's legitimately a question about silicon atoms, which a true theory of everything ought to answer, then you have a case that the Goedel theorems have something to say about theories of everything.
- But I think that's nonsense; it's not a question about silicon atoms at all. You could know everything about how silicon atoms interact, and still not know how to answer it, because it's really a mathematical question, not a physical one. --Trovatore (talk) 21:12, 12 November 2010 (UTC)
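To make the encoding concrete, here is a small Python sketch (the helper names are mine, and it requires both groups to contain at least two atoms, so that "cannot be arranged into a grid" coincides with "is prime"):

def can_form_grid(k):
    # True if k atoms can be laid out in an a x b grid with both a and b >= 2,
    # i.e. if k is composite.
    return k >= 4 and any(k % a == 0 for a in range(2, int(k**0.5) + 1))

def splits_into_two_non_grids(n):
    # Can n atoms be split into two groups of size >= 2, neither of which
    # can form such a grid?  For even n this is exactly "n is a sum of two primes".
    return any(not can_form_grid(a) and not can_form_grid(n - a)
               for a in range(2, n // 2 + 1))

# Goldbach's conjecture, rephrased as a statement about collections of atoms:
print(all(splits_into_two_non_grids(n) for n in range(4, 10001, 2)))  # prints True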
- I'm sorry, I don't follow. If an axiom of physics is that counting objects can always be represented by natural numbers, then doesn't Goldbach's conjecture (assuming it's true) follow directly? 76.68.247.201 (talk) 21:53, 12 November 2010 (UTC)
This doesn't seem terribly complex to me... perhaps I'm oversimplifying. Accepted physical theories currently take the form of mathematical models (as opposed to using, say, metaphysical constructs). Mathematical models by definition are axiomatizable with rigid rules of inference. This is not always done for physical theories, but it must in principle be possible, otherwise math simply doesn't apply. One key is that your rules of inference must be strictly specified: how do you translate from experiments to proofs? If your theory's experimental proof axioms are strong enough, they may not be recursively enumerable and the incompleteness theorem may not apply. If the theorem does apply, the experimental proof axioms won't prove everything, so that for a given true statement, sometimes you won't be able to prove it by experiment. As a trivial example, I could take the axioms "if ice cream exists, then the Goldbach conjecture is true" and "any human can judge the existence of physical objects", in which case this theory proves the Goldbach conjecture since I say ice cream exists. (More axioms are needed and this system is almost certainly inconsistent, but that's not the point.) A complete theory of everything would be a mathematical model for every interaction in the observable universe, together with rules of inference governing the correctness of experimental proofs. Depending on if these rules of inference meet Godel's requirements, you may not be able to experimentally prove some statements--or the (assumed correct) theory of everything, and hence the universe, is fundamentally inconsistent. 67.158.43.41 (talk) 23:33, 12 November 2010 (UTC)
- I'm sorry, I can't make enough sense of this to respond to it in detail. I'll just call out a few points:
- This is not always done for physical theories, but it must in principle be possible, otherwise math simply doesn't apply.
- That's an extraordinary statement. Can you back it up in any way? I'm afraid I think you're simply wrong here.
- The rest of it uses a lot of words in ways that do not appear to be their standard meanings. It is not clear at all, for example, what an "experimental proof" is, or its rules of inference. --Trovatore (talk) 08:06, 13 November 2010 (UTC)
- By "experimental proof", I meant taking the results of physically performing an experiment and turning them into a formal proof of a mathematical result. For instance, take a loop of string and make it approximately into a circle. Measure the length of the string and the radius of the circle. With the right axioms, this experiment rigorously proves bounds on the value of pi. The application of axioms to physical results which proves these bounds is what I meant by "experimental proof". I'm sorry I wasn't clearer. My point is that experimental results (even hypothetical experiments) must be interpreted in a formal manner to be a rigorous proof of any mathematical result. The rules of the translation from experiment to proof govern what is experimentally provable, even in principle, in a given physical theory.
- I'm not sure what you mean by "backing up" the statement you quoted. In what way is it incorrect? Perhaps our disagreement lies in the definitions of "math", "physical theory", and "theory of everything" which are admittedly quite vague.
- Overall, my point was to address the original question, "If we consider Physics to be an axiomized system, what would the implications of Godel's incompleteness theorem be?" by saying "if your axiomatized system's experiments can be translated to mathematical statements using axioms to which the incompleteness theorem applies, some statements won't be experimentally provable." I also believe the "if" in the original question is unnecessary in that physics must be in principle an axiomatized system, though it's interesting that this part of the discussion wasn't part of the original question. 67.158.43.41 (talk) 01:46, 14 November 2010 (UTC)
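As a toy illustration of what an "experimental proof" could look like once the translation axioms are written down, here is a minimal Python sketch of the string-and-circle example mentioned above; the numbers and the error model are hypothetical, chosen only to show how a measurement plus simple axioms about measurement error yields a rigorous interval for pi.

def pi_bounds(circumference, radius, eps):
    # Interval that must contain pi if the measured circumference and radius of a
    # circular loop are each accurate to within +/- eps (a toy error model).
    lower = (circumference - eps) / (2 * (radius + eps))
    upper = (circumference + eps) / (2 * (radius - eps))
    return lower, upper

# Hypothetical measurement: a 31.4 cm loop of string around a 5.0 cm radius,
# each figure good to about a millimetre.
print(pi_bounds(31.4, 5.0, 0.1))   # roughly (3.07, 3.21)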
- None of the above makes enough sense for me to address it. I think you have some very fundamental misconceptions about the nature of proof and formal systems. As to the claim that "physics must be in principle an axiomatized system", I think you're just wrong. --Trovatore (talk) 02:50, 14 November 2010 (UTC)
- You aren't specific enough for me to get anything out of this comment. For instance, you simply repeat "I think you're just wrong". Ah well, I suppose; it's not worth arguing, to me. 67.158.43.41 (talk) 19:53, 14 November 2010 (UTC)
- You made the original statement, so the burden is on you. You haven't supported it in any way. For that matter you haven't made it clear what you even mean. --Trovatore (talk) 20:12, 14 November 2010 (UTC)
Does the finiteness of the universe come into play? All the examples in List of statements undecidable in ZFC seem to involve infinite sets. If we restrict ourselves to a universe which only has a finite number of particles, in a finite volume, and which has only existed for a finite time, might not all these problems simply go away?--Salix (talk): 07:53, 13 November 2010 (UTC)
- Well, first of all, it is not known whether the universe is finite or infinite. However, even if it is finite, while that would (probably) mean that in principle there are only finitely many statements that can be coded up in the physical universe (we need to be more specific about what sort of coding we have in mind! I'm uncomfortable putting things this way but let's leave that aside for the moment), and that therefore some theoretical being living outside the universe could in principle make a list of all the ones that are true and axiomatize that list finitely (the axioms could simply be the list itself), there is no reason to think that anything like this could be done inside the physical universe.
- I want to be clear that I am not saying there is a correct theory of everything. I am saying only that the Goedel theorems per se have very little to say about the question. --Trovatore (talk) 08:06, 13 November 2010 (UTC)
The OP did not require 'physics' to be some theory of everything, but perhaps rather any axiomatic theory having a physical interpretation, such as Hilbert's axioms for Euclidean geometry. It is interesting to know if there are undecidable propositions in Euclidean geometry. Bo Jacoby (talk) 12:58, 13 November 2010 (UTC).
- The problem with all of this is that the "axioms" of a physical theory seem to be more about a correspondence between physics and mathematics. For example, saying quantum systems can be modeled by Hilbert spaces is not a mathematical axiom. If something in the mathematical model's theory, say the theory of Hilbert spaces, is undecidable, that doesn't mean the physics is. If an experiment did determine something that mathematics couldn't, it would just mean that the physics doesn't perfectly correspond with the mathematics. 24.3.88.182 (talk) 18:38, 13 November 2010 (UTC)
- I agree. Nothing says a given physical theory's axioms cannot be stronger than some arbitrary set of mathematical axioms. If my physical theory is inconsistent, it's stronger or equivalent to every mathematical theory trivially. 67.158.43.41 (talk) 01:54, 14 November 2010 (UTC)
Experimental outcomes are not necessarily computable using the postulates of the theory. A well known result is the following. In a universe exactly described by classical mechanics, a computer can be constructed such that executing a clock cycle takes half the time it took to execute the previous clock cycle. Memory space can also be expanded by a factor of two in each cycle. This means that in such a universe, an infinite number of computations can be performed in a finite amount of time. You can then e.g. verify the Riemann hypothesis by brute force verification that all the nontrivial zeros are on the critical line, regardless of whether or not a proof of this fact exists. Count Iblis (talk) 15:58, 14 November 2010 (UTC)
- I would suggest the experimental outcome you describe would in fact be a rigorous proof of the Riemann hypothesis. The axioms applied in the proof would be those of classical mechanics and a correspondence between physical results and the complex numbers. Ignoring axioms translating from physical results to complex numbers (i.e. limiting yourself to pure mathematical reasoning), there may not be a proof, but that's simply because the resulting system would be weaker. 67.158.43.41 (talk) 20:06, 14 November 2010 (UTC)
- OK, maybe here's an opening to nail something down and have a discussion with actual content. You do understand that, while people have made various extensions to other things, the Gödel theorems per se apply only to first-order logic? As an extreme example, the original (second-order) Peano axioms are categorical in second-order logic. All models of them (in the sense of full second-order logic) are isomorphic, so they completely determine arithmetical truth (and not just to first order!), and are presumably consistent.
- The "proof" you are talking about above does not correspond to a proof in first-order logic. It could maybe be formalized in infinitary logic if you allow the omega-rule. But the Gödel theorems are then not applicable, or at least not in their usual form. --Trovatore (talk) 20:42, 14 November 2010 (UTC)
- I never said the Godel theorems were necessarily applicable in the above example. I've been careful not to specify the rules of inference or types of axioms allowed. I believe you've missed almost every point I've made and have no desire to argue with you, so I will stop responding to this thread. 67.158.43.41 (talk) 04:28, 15 November 2010 (UTC)
- In other words, you have refused to say what you mean. Please refrain from making contributions where you are not willing to say what you mean. --Trovatore (talk) 04:44, 15 November 2010 (UTC)
- Are we talking about mathematics or physics? Something like "Quantum mechanical systems can be modeled using Hilbert spaces in such and such a way" is not an axiom of mathematics, nor is it mathematical. In short, the types of "axioms" that define the correspondence between mathematics and physics are not mathematical axioms, thus Gödel's theorem doesn't really apply to them. 66.202.66.78 (talk) 07:01, 15 November 2010 (UTC)
Polynomials given by a recurrence relation
Let W_k(x) be a monic polynomial of degree k satisfying the following recurrence relation:
W_{k+1}(x) = x W_k(x) + (a k^2 + b k + c) W_{k-1}(x),
where a, b, and c are real numbers.
My question is: what is known about such polynomials (name, generating function, integral representation, roots, differential equations, etc.)? Obviously, those are none of the well-known orthogonal polynomials.
Thanks in advance!
Physicist —Preceding unsigned comment added by 212.14.57.130 (talk) 11:25, 12 November 2010 (UTC)
- Assuming W_{-1} = 0 (which does not really have degree −1), and W_0 = 1, the trick is to solve the recurrence relation for k ≥ 1 to get a closed representation for W_k. Study recurrence relation. Bo Jacoby (talk) 16:01, 12 November 2010 (UTC).
- With those initial conditions I get the following explicit formula:
- where
- with the sum being taken over all satisfying and
- where S(0, n, k) is taken to be 1. Also,
- In words, S(i, n, k) is the polynomial whose terms are all the monomials formed by evaluating A at i (integer) values and multiplying those evaluations, where those values are at least k apart and are between 1 and n. In the case of n=0, this is just S(0, -1, 2) x^0 = 1. For n=1, this is S(0, 0, 2) x^1 = x. For n=2, this is S(0, 1, 2) x^2 + S(1, 1, 2) x^0 = x^2 + A(1). Less trivially, for n=6, this is
- The polynomials S(i, n, k) are similar to the elementary symmetric polynomials. The term S(1, n, k) simplifies in your case significantly; the others might as well. Perhaps someone else knows more. Also, if I were you, I would double-check my algebra. 67.158.43.41 (talk) 17:45, 12 November 2010 (UTC)
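For anyone who wants to experiment, the polynomials can also be generated directly from the recurrence with a few lines of sympy; this is only a sketch assuming the initial conditions W_{-1} = 0 and W_0 = 1 used above, and the function name is mine.

import sympy as sp

x, a, b, c = sp.symbols('x a b c')

def W(n):
    # W_{k+1} = x*W_k + (a*k**2 + b*k + c)*W_{k-1}, with W_{-1} = 0 and W_0 = 1.
    Wprev, Wcur = sp.Integer(0), sp.Integer(1)
    for k in range(n):
        Wprev, Wcur = Wcur, sp.expand(x*Wcur + (a*k**2 + b*k + c)*Wprev)
    return Wcur

for n in range(5):
    print(n, W(n))   # e.g. W(2) = x**2 + a + b + c, matching the x^2 + A(1) above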
vocabulary of curly things
Euler's spiral is the curve in which κ′, the first derivative of curvature (=second derivative of the tangent angle) with respect to arc-length, is constant. Is there a more concise name for κ′?
Another word for that spiral is clothoid. Is this word also applied to other curves in which κ′ is continuous (e.g. polynomial) but not constant? If not, is there a more general word? (I'm experimenting with such curves.) —Tamfang (talk) 20:05, 12 November 2010 (UTC)
- Possibly Tortuosity but I don't think there is any well established term.--Salix (talk): 20:53, 12 November 2010 (UTC)
- To the second question, [1] has "polynomial spiral". —Tamfang (talk) 22:29, 12 November 2010 (UTC)
- In some engineering applications (vibration analysis in particular), strain is equivalent to the first derivative of curvature. Though this is a bit specialised, you could use the word for conciseness, not forgetting to define it if communicating with someone else.→86.132.164.178 (talk) 14:34, 13 November 2010 (UTC)
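For experimentation with such curves, the trace of a spiral whose curvature is a polynomial in arc length can be computed numerically by integrating the tangent angle; a rough numpy sketch (function name and sample coefficients are mine, and the integration is a crude cumulative sum):

import numpy as np

def polynomial_spiral(coeffs, s_max=10.0, n=5000):
    # Trace a curve whose curvature kappa(s) = polyval(coeffs, s) is a polynomial
    # in arc length s (numpy's highest-degree-first coefficient convention).
    s = np.linspace(0.0, s_max, n)
    ds = s[1] - s[0]
    kappa = np.polyval(coeffs, s)
    theta = np.cumsum(kappa) * ds          # tangent angle = integral of curvature
    x = np.cumsum(np.cos(theta)) * ds      # position = integral of the unit tangent
    y = np.cumsum(np.sin(theta)) * ds
    return x, y

# Euler spiral (clothoid): kappa(s) = s, i.e. constant kappa'.
x1, y1 = polynomial_spiral([1.0, 0.0], s_max=8.0)
# A spiral with non-constant but polynomial kappa': kappa(s) = 0.2*s**2 - s.
x2, y2 = polynomial_spiral([0.2, -1.0, 0.0], s_max=8.0)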
November 13
Infinitely differentiable implies polynomial
Hello everyone,
I am attempting to prove the following:
Let f: R → R be infinitely differentiable, such that for every x in R there is some n (depending on x) with f^(n)(x) = 0 - then f must be a polynomial.
How would I get started on this - is it best to try to prove it directly or to use some more advanced tools to prove the result? I'm comfortable with the majority of analysis, certainly up to an undergraduate level at least (Banach spaces etc. are all fine!) - I have spent quite a while on this but I can't see anything obvious - proof by contradiction, for example? Doesn't appear like there's any obvious solution to me!
Any help or guidance would be hugely appreciated, thanks! Spalton232 (talk) 02:15, 13 November 2010 (UTC)
- Can't you just integrate f^(n) n times? Is there something I'm overlooking? 67.158.43.41 (talk) 03:53, 13 November 2010 (UTC)
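For the easy version in which a single n works for every x (the "reversed quantifiers" case discussed further down), integrating n times does settle it immediately; a quick sympy illustration of that case:

import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

# A function whose 4th derivative vanishes identically is a polynomial of
# degree at most 3; sympy's ODE solver confirms the general solution.
print(sp.dsolve(sp.Eq(f(x).diff(x, 4), 0), f(x)))
# f(x) = C1 + C2*x + C3*x**2 + C4*x**3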
- How about using Taylor expansions? Since f is infinitely differentiable, it has a Taylor expansion. Since after a finite n the derivatives are all zero, the Taylor expansion is finite, which means that it is a polynomial (of degree n). -Looking for Wisdom and Insight! (talk) 03:56, 13 November 2010 (UTC)
- Yeah, you've proved that every Taylor Series of the function is a polynomial... That doesn't really seem to help much wrt the original function, though, since there's no assumption that it's analytic.
- One thing that's clearly crucial here is the fact that you have a finite Taylor expansion around every x; if it was just around zero, then the function for nonzero x, 0 for x=0, would be a counterexample. --COVIZAPIBETEFOKY (talk) 04:30, 13 November 2010 (UTC)
- Obviously COVIZAPIBETEFOKY meant to write e^(-1/x^2). Bo Jacoby (talk) 10:16, 13 November 2010 (UTC).
- Erm, yes. Sorry, got a bit bogged down by the symbols; I haven't written formatted text on wp in a while... --COVIZAPIBETEFOKY (talk) 14:11, 13 November 2010 (UTC)
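For reference, the flatness of this kind of function at the origin can be checked symbolically; a small sympy sketch with the usual example exp(-1/x^2) (extended by 0 at x = 0), whose derivatives all have limit 0 there, so its Taylor series at 0 is finite (indeed zero) even though the function is not a polynomial:

import sympy as sp

x = sp.symbols('x')
f = sp.exp(-1/x**2)   # understood as 0 at x = 0

# Each derivative tends to 0 at the origin, so every Taylor coefficient at 0 vanishes.
for k in range(5):
    print(k, sp.limit(sp.diff(f, x, k), x, 0))   # prints 0 for each k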
- Certainly if the quantifiers were reversed, this would be simple; just integrate n times. So the goal is to reverse the quantifiers. My first instinct is the Baire category theorem. Let E_n be the set of x such that f^(n)(x) = 0; these are closed sets whose union is all of R. Since the reals are a Baire space (as is every open subset of the reals), one of the E_n has non-empty interior. In fact, this holds on every neighborhood. So we have densely many intervals on which the function is a polynomial. Unfortunately, I don't see how to put this together to get the final result.--203.97.79.114 (talk) 04:37, 13 November 2010 (UTC)
- Nevermind, I was overdoing it. I will use the following fact: if two polynomials agree on an infinite number of points, they are the same polynomial. From this, it follows that it suffices to prove the result for arbitrary closed intervals. So fix one.
- Let U_n be the set of x such that |f^(n)(x)| < ε. Then U_n is an open set, and by hypothesis the family of the U_n covers the interval for any ε > 0. By compactness, some finite collection of the U_n do. This lets us show that the remainder term of the finite taylor series goes to zero. So f is analytic on the interval. As noted above, this suffices.--203.97.79.114 (talk) 05:11, 13 November 2010 (UTC)
- Gah, forget I said anything. Not sure what I was thinking when I said we could show the remainder term gets small.--203.97.79.114 (talk) 05:19, 13 November 2010 (UTC)
- I'd iterate backwards, show that if f'(x) is always zero then f(x) is a constant, then that if the first derivative is always a constant then the function is linear, if linear the function is a quadratic etc. Dmcq (talk) 09:45, 13 November 2010 (UTC)
- You're missing that the quantifiers are reversed. There isn't (a priori) a single n for all x. Rather, every x has an n. So we can't assume some derivative is identically 0.--203.97.79.114 (talk) 10:42, 13 November 2010 (UTC)
- Sorry yes I see you pointed that out again above, I should really read things properly. Okay it needs considerably more thought. Dmcq (talk) 10:49, 13 November 2010 (UTC)
- As above for any closed finite interval n must be bounded and therefore we can get a polynomial there. The whole line can be covered by overlapping intervals. Where the intervals overlap the polynomials must be the same. Therefore there is one polynomial covering the whole line. Hope I haven't missed out something this time. Dmcq (talk) 10:59, 13 November 2010 (UTC)
- It's not obvious why, on any finite interval, we should be able to find a fixed n such that the nth derivative is zero. The best you can easily do is what 203.97.79.114 did above: use Baire to show that there is some interval on which this holds. It's not clear how to extend this. Algebraist 11:04, 13 November 2010 (UTC)
- The 'closed' is important, there can be no limit point in a closed interval where n goes to infinity if it is bounded at each point. Dmcq (talk) 11:10, 13 November 2010 (UTC)
- Except n need not be infinite at the limit point. For appropriate , the limit of is 0 in every derivative.--203.97.79.114 (talk) 11:14, 13 November 2010 (UTC)
- Thanks, now that's why I should have thought about it a bit more. :) And one could have a limit point in every finite interval which is nasty. Dmcq (talk) 11:29, 13 November 2010 (UTC)
- Now you see why I struggled! I have a feeling Baire Category theorem is very much along the lines of the approach we're meant to use though, as it's something we only learned recently - could the Stone-Weierstrass theorem be of use, perhaps? Or maybe the Tietze extension theorem? These are the two other results which look relevant to me: particularly the former, though again I can't see how to apply it directly to achieve the result - though hopefully one of you might be able to! It certainly looks pertinent... Spalton232 (talk) 14:00, 13 November 2010 (UTC)
- My intuition is that if you take an interval [a, b] on which f^(m) (x) = 0 for m >= n, where the interval is as large as possible for fixed n, some point left of a (but close to a) or right of b (but close to b) must have non-zero kth derivative for infinitely many k. This would be a contradiction, so that the interval must have been R, which causes the result to follow as above. A non-analytic smooth function enjoys this property, at least. The existence of an interval [a, b] follows from Baire as noted above, which also lends plausibility. Regardless, I wouldn't suggest my intuition if there seemed to be fruitful approaches at hand. 67.158.43.41 (talk) 02:08, 14 November 2010 (UTC)
- What is the problem with my taylor polynomial approach above? Yes, I have shown that every Taylor expansion is a polynomial but this function is analytic as the remainder term goes to zero as n goes to infinity because after a finite n, all of the derivatives are zero. So the function is equal to its taylor expansion. Right? -Looking for Wisdom and Insight! (talk) 05:19, 15 November 2010 (UTC)
- The derivative forms of the Taylor remainder all involve the (n+1)th derivative of f at some point close to x, depending on n. So there's no reason the remainder should go to zero unless there's an n such that the nth derivative is 0 in a neighbourhood of x, which is the same problem again. Algebraist 11:40, 15 November 2010 (UTC)
- If you look at Smooth function you'll see a function which is infinitely differentiable and where all the derivatives at one point are zero - yet it is not a polynomial. The bump functions are like this. Dmcq (talk) 08:26, 15 November 2010 (UTC)
- By the way I was just looking at Sard's theorem. It implies the values of the last term before the zeroes must form a set of measure zero, so if the value of this last term changes between two points there must be higher order polynomials in between. The problem with going down this route is as above - one can always approach infinity without necessarily getting there. Dmcq (talk) 08:46, 15 November 2010 (UTC)
- From my perspective, the strongest result we have here so far (this seems to be a collaborative effort now!) is that there is an interval on which for some n, m > n implies the m-th derivative is identically 0 on that interval, so then f must behave like a polynomial on that interval, and outside that interval there must be some nearby point on which the derivatives are not identically 0 after some sufficiently large m - does this lead us to a contradiction necessarily? Spalton232 (talk) 01:21, 16 November 2010 (UTC)
- I'm trying to imagine a counterexample, and where it must go wrong, but I can't tell, nor can I come up with an explicit formula of a function with these properties. Maybe someone can see why this function can't actually exist: Consider a sort of bump function f with support [0,1], which is identically 1 on [1/3,2/3], so then f' has support [0,1/3] ∪ [2/3,1] and let it be equal to some positive constant on the interval [1/9,2/9] and some negative constant [7/9,8/9]. Then f" has support [0,1/9] ∪ [2/9,1/3] ∪ [2/3,7/9] ∪ [8/9,1] and let it be some positive constant on [1/27,2/27] and [25/27,26/27] and a negative constant on [7/27,8/27] and [19/27,20/27], etc. Rckrone (talk) 04:28, 16 November 2010 (UTC)
- I believe you're getting at my "intuition" above. I wasn't able to find an obvious contradiction but then again I haven't been able to construct a counterexample function, i.e. one with nth derivative identically zero on a closed interval and each point off that interval nonzero at only finitely many derivatives. I almost suspect the question is false, the counterexample is awful, and the quantifiers are just reversed; then again maybe we're all just missing something. 67.158.43.41 (talk) 04:35, 16 November 2010 (UTC)
- Ok, I've got some actual values now to define this function piece-wise. Let c_0 = 1, and c_n = (3^(n+1)/2) c_(n-1) for n > 0. Then f^(n)(x) = ±c_n on the intervals chosen how I did before (so f^(0) = 1 on [1/3,2/3], f^(1) = 9/2 on [1/9,2/9] and f^(1) = -9/2 on [7/9,8/9], etc). I guess you could describe the intervals as [0.a_1...a_n1, 0.a_1...a_n2] where these are ternary representations with all a_i equal to 0 or 2, and then the sign there corresponds to the number of 2s. In other words the intervals you remove at each step in constructing the Cantor set (except closed rather than open). The gaps are filled in by the higher derivatives. I think that should work out, and I think it's a counterexample. Rckrone (talk) 05:47, 16 November 2010 (UTC)
- I tried to construct a counter-example using a similar approach, but there's a problem: you never make any definition on elements of the Cantor set which are not endpoints of intervals -- the majority of the Cantor set. Since you have already defined it on a dense set, you can extend the function to the unique continuous extension, but now you need to prove that all the derivatives exist at these new points and that the derivatives are eventually 0.--203.97.79.114 (talk) 07:41, 16 November 2010 (UTC)
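For anyone probing Rckrone's construction numerically, the stage-n intervals and the constants c_n can be generated mechanically; a small Python sketch (helper names are mine; it only enumerates the intervals and signs and does not address the convergence question raised above):

from fractions import Fraction
from itertools import product

def stage_intervals(n):
    # The middle-third intervals [0.a1...an1, 0.a1...an2] (ternary, each a_i in {0, 2})
    # removed at stage n+1 of the Cantor set construction, with sign (-1)**(number of 2s).
    out = []
    for digits in product((0, 2), repeat=n):
        left = sum(Fraction(d, 3**(i + 1)) for i, d in enumerate(digits))
        lo = left + Fraction(1, 3**(n + 1))
        hi = left + Fraction(2, 3**(n + 1))
        out.append((lo, hi, (-1) ** digits.count(2)))
    return out

# c_0 = 1, c_n = (3**(n+1)/2) * c_{n-1}; the proposed f would have f^(n) = sign * c_n
# on each stage-n interval (f = 1 on [1/3,2/3], f' = +-9/2 on [1/9,2/9] and [7/9,8/9], ...).
c = [Fraction(1)]
for n in range(1, 4):
    c.append(Fraction(3**(n + 1), 2) * c[-1])

for n in range(3):
    print(n, c[n], stage_intervals(n))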
- f(x) is infinitely differentiable. In particular, that implies that all derivatives change smoothly. If there exists a finite N such that f^(m)(x) = 0 for all m > N on any continuous interval [a, b] of non-zero measure, then it implies f^(m)(x) = 0 for all m > N everywhere. To do otherwise would imply a kink in at least one of the higher order derivatives of f(x), and hence a point that was not infinitely differentiable.
- Hence, either there exists a finite bound N everywhere, or there exists an arbitrarily large n in every interval no matter how small. The latter possibility can be excluded because f(x) is infinitely differentiable. Therefore n is bounded. Knowing that an upper bound N exists everywhere such that f^(m)(x) = 0 for all m > N, it follows immediately that f^(N)(x) is a constant, f^(N-1)(x) is at most linear, etc., which allows one to conclude that f(x) is a polynomial. Dragons flight (talk) 08:31, 16 November 2010 (UTC)
- Bump functions are a counter-example to your first paragraph.--203.97.79.114 (talk) 11:29, 16 November 2010 (UTC)
- No, the bump transition requires an infinite series of non-zero derivatives. Sorry if this was too implicit. If you have an interval [a, b] where all derivatives m > N equal zero, but in a neighborhood adjacent to [a, b] the derivative m* > N is changing, then continuity and smoothness imply that all derivatives m > m* must also start changing over that neighborhood. This implies at least an interval adjacent to [a, b] where the derivative tower is everywhere unbounded, which contradicts the assumptions of the problem. You can't have a bump function with a bounded derivative tower. Dragons flight (talk) 19:05, 16 November 2010 (UTC)
- How do continuity and smoothness imply that? That would solve the problem, but I can't immediately see the implication. Algebraist 19:11, 16 November 2010 (UTC)
- Neither do I. It's impossible to have two different polynomials on adjacent intervals such that their union is infinitely differentiable, that much is clear. Hence if b is chosen so that f^(m) = 0 on [a,b], but not on any larger interval, then for every k, b is a limit point of the set {x > b: f^(k)(x) ≠ 0}. However, I do not see why this should imply the existence of a single x such that f^(k)(x) ≠ 0 for infinitely many k.—Emil J. 19:39, 16 November 2010 (UTC)
- Let [a, b] be an interval such that f^(m)(x) = 0 on [a, b] for all m > N. Let c not in [a, b] and m* > N be such that f^(m*)(c) ≠ 0.
- By virtue of smoothness and without loss of generality, we can choose c such that for all x in (b, c], f^(m*)(x) ≠ 0. Since f^(m*) is changing, it implies f^(m*+1) is non-zero over at least some sub-interval (b, d] within (b, c]. The observation that this latter subinterval must also start at b is a consequence of the fact that f^(m*)(b) = 0.
- This allows one to build a tower of nested intervals (b, c_{m*}] ⊇ (b, c_{m*+1}] ⊇ (b, c_{m*+2}] ⊇ ...
- Further we can choose c_m such that f^(m)(x) ≠ 0 for all x in (b, c_m].
The infinite intersection of a nested series of non-empty intervals must be non-empty (containing at least one point, in general). Therefore, there is a point at which f^(m) ≠ 0 for every m ≥ m*, which is our contradiction. Dragons flight (talk) 19:59, 16 November 2010 (UTC)
- Hmmm, brain fart. The infinite intersection of open nested sets can be empty. For example if An = (0, 1/n]. Dragons flight (talk) 20:14, 16 November 2010 (UTC)
- It's also not obvious to me why you can choose a c as claimed.--130.195.5.7 (talk) 22:21, 16 November 2010 (UTC)
- The Baire category theorem isn't something I've studied, I must have a look at it, but above it is asserted it says there must be an open set where n is bounded and therefore the function is polynomial in some open set somewhere. Am I reading that right - it sounds a very strong result. However if the end points of such a set are not polynomial you'd have a discontinuity, so any such set can be closed. If we construct a function by collapsing all these closed sets where we have a polynomial and join up all the end points, then we should end up with another smooth function where every point has a finite number of non-zero differentials - but which has no open sets where we have a polynomial, or they only occupy a finite length; either way it looks like a contradiction with the original business of always being able to find an open set. Is that application of the Baire Category Theorem right? If so then you're about there I believe. Dmcq (talk) 12:01, 16 November 2010 (UTC)
- It's true that the Baire theorem implies that there's a positive-length interval on which f is a polynomial. In fact, it implies that every open interval has a positive-length subinterval on which f is a polynomial. It's clear that the set of points where f matches some given polynomial is closed, so these intervals can always be taken to be closed and unextendable. I don't understand the rest of your argument. Algebraist 12:05, 16 November 2010 (UTC)
- Possibly for the very good reason that what I was thinking of was not well thought through and was wrong or not quite there. You'd have to adjust the other end by the slope and height etc and with a dense set of such polynomials you'd have no guarantee that removing a bunch of them wouldn't send the other end off to infinity. Dmcq (talk) 12:14, 16 November 2010 (UTC)
- Ulp, it has struck me one can even take away positive length intervals from every single little interval without having the remainder of measure zero. Dmcq (talk) 13:56, 16 November 2010 (UTC)
- Yes. There are open dense subsets of the real line of arbitrarily small positive measure. Algebraist 15:16, 16 November 2010 (UTC)
Define n(x) to be the smallest integer such that f^(m)(x) = 0 for all m ≥ n(x). Then, on any closed interval the function n(x) has a maximum. So, f(x) is a polynomial on any closed interval. Count Iblis (talk) 16:49, 16 November 2010 (UTC)
- Why should n have a maximum on any closed interval? Algebraist 16:50, 16 November 2010 (UTC)
- Ah, I see the problem :) . I don't have a good answer yet. I thought about the following: Suppose that n(x) doesn't have an upper bound. Then there exists a converging sequence x_k such that n(x_k) → ∞. Call the limit point y. Then consider the sequence of functions f^(n(x_k)−1). There then exist intervals containing the point x_k such that f^(n(x_k)−1) is nonzero on the interval. I was then thinking about cooking up a contradiction between the f^(n(y)) and all higher derivatives being zero at y, and the intervals shrinking in size faster than 19:28, 16 November 2010 (UTC)
- Suppose you get an upper bound in any closed interval. What's to say those upper bounds don't tend to infinity as the size of the interval increases?
- Actually I think I just solved that one. Take the interval [0, 1], get a least upper bound, so f is a degree N polynomial on [0, 1]. Do the same to [-1, 0], giving a degree M polynomial. For the two polynomials' derivatives to agree at 0, N=M, which also forces them to be the same polynomial. Extend the new interval [-1, 1] in a similar manner inductively to show that f is a degree N polynomial. 67.158.43.41 (talk) 22:18, 16 November 2010 (UTC)
- Ok, so only the upper bound on n(x) needs to be proved. Let's try something different. Define S_n to be the support of the function f^(n). If f(x) is not a polynomial, then all the S_n are dense in R. Let U be the intersection of all the S_n. Baire's theorem says that the intersection of any countable collection of dense open sets is dense, so U is dense in R. Then consider n(x) for some x in U. It was assumed in the problem specification that n(x) is a finite number for all x in R. However, x being in U implies that n(x) has to be larger than any finite number. Count Iblis (talk) 00:46, 17 November 2010 (UTC)
- You'll need to justify why f(x) not being a polynomial implies the S_n are dense.--130.195.5.7 (talk) 00:59, 17 November 2010 (UTC)
- Yes, I was getting ahead of the argument way too fast again :). Let's try this. Suppose for some n the set S_n is not dense in R. Then there exists an open set on which the nth derivative of f is zero. So, f is a polynomial on that open set. Then consider the complement of that open set. Either all the S_n are dense in that set (in which case we get a contradiction per the above argument using Baire's theorem and f is a polynomial on the entire set), or there again exists an open set where you can represent f as a polynomial. So, this suggests that a transfinite induction argument can be set up to show that for any x there exists an open set V such that x is in the closure of V and such that f is a polynomial on V. Then we're done because n(x) is bounded on all these open sets as f is a polynomial on them. So, Dr. IP's argument above applies to join them together. Count Iblis (talk) 18:32, 17 November 2010 (UTC)
- I'm not totally convinced the intervals that your argument supplies have to be nice enough to be glued together the way Dr. IP is suggesting to cover the whole real line. For example, if we start with the interval [-1,0], and we have the intervals [1/(n+1),1/n] to choose from, how can we extend [-1,0] past 0? Rckrone (talk) 21:02, 17 November 2010 (UTC)
I think this can be dealt with using transfinite induction. We know that every x in R is in an interval on which f is a polynomial (for the argument we need to drop the fact that we can chose these to be closed intervals). We can define a partial ordering on the set of all such representations of f by declaring that representation R1 is larger than or equal to representation R2 if all the intervals used by R1 are unions of those of R2, or closure of unions of subsets. Then there exists a maximal totally ordered subset of the set of all the representations, which we know is nonempty. From any totally ordered set of representations, we can construct a representation as follows. For any point x we take the union of all the sets that have x as a member from the representations in the totally ordered set. So, we then have an unambiguous prescription to partition R into intervals on which f is a polynomial.
If we then take the maximal totally ordered subset and construct the representation of f from that, we get a representation that is larger or equal to all the elements in the totally ordered subset. Since this subset is maximal, it is an element of this set that cannot be extended. That then implies that it contains R as its only interval. Count Iblis (talk) 18:20, 18 November 2010 (UTC)
- "We know that every x in R is in an interval on which f is a polynomial " How do we know this? Once we have that fact, the result follows immediately from the fact that the reals are not a countable union of closed intervals, except in the trivial fashion.--203.97.79.114 (talk) 21:55, 18 November 2010 (UTC)
- countable union of disjoint closed intervals--203.97.79.114 (talk) 21:57, 18 November 2010 (UTC)
That statement was not derived rigorously and I see now that it isn't actually true. Let's start right from the start again and do everything rigorously. Let V be some arbitrary open interval and C its closure. We put X = V. Then define the sets A_n = {x : f^(n)(x) ≠ 0}. The closures of these sets are the supports of the f^(n), but we don't want to take the closure. Now X is an open set, and it follows from the continuity of the f^(n) that the A_n are open sets. Define X_n = A_n ∩ X, which are clearly open sets. Then suppose that all the X_n are dense in C. Then Baire's theorem says that the union of all the X_n is also dense in C (Baire's theorem applies because all the X_n are open and because C is a complete metric space). But then we get a contradiction because at a point x in the union of all the X_n, all the derivatives of f would be nonzero, which contradicts the problem statement.
So, we conclude that given an arbitrary open interval V, there always exists an n such that X_n is not dense in V. This means that any arbitrary open interval V contains an open interval on which f^(n) = 0, so f is a polynomial there. Then given an arbitrary point x, any neighborhood of x will contain such an interval, so either x is inside such an interval or it is arbitrarily close to one.
Let's define the set S to be the set of all these intervals. The union of all the elements of this set is dense in R. There can, of course, be different such sets of intervals that represent the function as polynomials on each interval. We shall call such a set a representation of f. We can define a partial ordering on the set of all such representations of f by declaring that the representation S1 is larger than or equal to representation S2, if all the intervals in S1 are unions of those of S2, or closure of unions. Then there exists a maximal totally ordered subset of the set of all the representations of f, which we know is nonempty. From any totally ordered set of representations, we can construct a representation as follows. For any point x we take the union of all the sets that have x as a member from the representations in the totally ordered set. So, we then have an unambiguous representation of f.
If we then take the maximal totally ordered subset and construct the representation of f from that, we get a representation that is larger or equal to all the elements in the totally ordered subset. Since this subset is maximal, it is an element of this set that cannot be extended. That then implies that it contains R as its only interval. Count Iblis (talk) 01:56, 19 November 2010 (UTC)
- When you apply Baire's theorem, you meant "intersection" instead of "union", but that's a minor thing. In your maximal sets argument, I don't understand what you mean by "representation"--a set of open sets where on a particular open set f is 0 at some derivative? You seem to be using Zorn's lemma. To be clear, it just provides an element which is maximal amongst all comparable elements--not necessarily amongst all elements, since some may not be comparable to it. This allows multiple maximal elements. I also don't understand the reasoning in "Since this subset is maximal, it is an element of this set that cannot be extended. That then imples that it contains as its only interval." 67.158.43.41 (talk) 07:33, 19 November 2010 (UTC)
- Yes, union should be intersection; you can tell that it has been a long time since I passed the functional analysis exam and haven't been doing this kind of stuff since that time :) . I was first using the idea that you can repeatedly use the Baire argument so that starting with R and removing open intervals, you end up with a set where you can't find any open intervals anymore. This may require a separate transfinite induction argument to justify rigorously. What you then have is a collection of open intervals, the union of which is dense in R and such that on each open interval in this collection, f is a polynomial. I was then using Hausdorff's maximality theorem to set up a transfinite induction argument (so, if we have a partially ordered set, then this will contain a totally ordered subset which is maximal). A representation of f is any collection of intervals, the union of which is dense in R, such that on each interval f is a polynomial. So, the set of all representations is nonempty.
- Then we define a partial ordering on this set of representations. If one representation S1 can be obtained from another representation S2 by merging intervals in S2, then we say that
S1 > S2. Then we consider a maximal totally ordered subset of all representations. On any representation, one may attempt to use your argument to combine intervals to define the function as a single polynomial on a larger interval, which would lead to a representation which is larger in the sense of the defined partial ordering. But this cannot work for the maximal element (that maximal element needs to be constructed from the maximal totally ordered subset), otherwise the totally ordered subset of representations was not maximal. I then claim that this "maximal representation" cannot contain an interval of finite length, but perhaps this needs to be justified rigorously... Count Iblis (talk) 16:51, 19 November 2010 (UTC)
- You're working way too hard to establish this maximal set. f equaling a polynomial is a closed property, so any interval on which f equals a polynomial is contained in a maximal such interval. Further, if two such intervals intersect, their union is such an interval. So any interval on which f equals a polynomial is contained in a unique maximal such interval. So take the collection of these maximal intervals. This is the unique maximal element from your partial order.
- Next, you're trying to argue that if this collection isn't simply {ℝ}, you can contradict maximality. Why should this be so?--203.97.79.114 (talk) 01:28, 20 November 2010 (UTC)
- I'm applying the maximality argument to the set of intervals (which cover some dense subset of ℝ), not to the intervals themselves. Otherwise we bump into Rckrone's counterexample given above. The intervals to the right of the origin are contained in the maximal interval ]0,1], so after having found the maximal intervals, we wouldn't be done. Then, while the maximal intervals in this case can be combined in a single step, note that Rckrone's argument can just as well be used to produce an infinite sequence of infinite sequences of ever-shrinking intervals, so that the set of maximal elements you end up with contains an infinite sequence of shrinking intervals. But, in my case, maximality refers to the set of intervals, and it thus means that there are no intervals at all that can be extended. Count Iblis (talk) 16:09, 20 November 2010 (UTC)
If n(x) is constant, then the result is trivially a polynomial everywhere. Suppose n(x1) < n(x2); that implies that f^(n(x1)) changed value at some point during the interval [x1, x2]. But that implies its derivative also changed value at some point during the interval. Which means there is some x3 in (x1, x2) at which an even higher derivative is nonzero. At which point one can repeat the same argument for [x1, x3], [x3, x2], and get even higher n, etc. n(x) can be constant over finite intervals (for example by having intervals where f(x) is a polynomial), but between those intervals there would seem to be dense regions of arbitrarily large n. I'm pretty sure that implies a contradiction with the assumptions of smoothness, but I'm not yet sure how to finish it. Dragons flight (talk) 00:57, 17 November 2010 (UTC)
- More to the point, there have to be arbitrarily large derivatives in those intervals which, I'd have thought, must conflict with the idea of having a derivative there - but I can't quite get there either. Dmcq (talk) 01:07, 17 November 2010 (UTC)
Calculus in a nutshell
Can you please give me a very nice definition of calculus? Say that I was on an ambulance stretcher and only had a minute or two to live, but instead of calling a priest for my last rites, I called someone from the Wikipedia math desk, because my last dying wish was to understand the concept of calculus. How would you explain it to me?
Also, is calculus similar to thought patterns, or rhetoric? Can knowing calculus really make your logos argument better? AdbMonkey (talk) 23:26, 13 November 2010 (UTC)
- Two minutes? Can't be done. I'd tell you that calculus is fields of Elysium, with soft breezes and harp music, and you should now go there to rest. --Trovatore (talk) 23:45, 13 November 2010 (UTC)
- Do you have any confusion with the first paragraph of the article Calculus? "Calculus is a branch of mathematics focused on limits, functions, derivatives, integrals, and infinite series. ... It has two major branches, differential calculus and integral calculus ... Calculus is the study of change" Or the definition from Wiktionary [2] "Differential calculus and integral calculus considered as a single subject". Differential calculus is "concerned with the study of the rates at which quantities change", (e.g. the slope of (curved) lines), and integral calculus is concerned with the area/volume enclosed by curves. The Fundamental theorem of calculus says that these two different techniques, differentiation (slope-finding) and integration (area-finding), are inverses of each other. I don't really understand your final question, except to say that there isn't anything particular about the thought patterns involved in calculus which isn't also involved in other fields of advanced mathematics. -- 174.21.243.119 (talk) 00:02, 14 November 2010 (UTC)
- How about this: "Calculus is the practical solution to Zeno's paradox, extended to arbitrary dimensions." Calculus is closer to geometry than to logic, and good visualization skills will serve you better than abstract logic in most cases. --Ludwigs2 00:09, 14 November 2010 (UTC)
- If I run at 3 miles per hour for 1 hour, I will have gone 3 miles. If instead I start at 0 miles per hour and speed up to 6 miles per hour smoothly, how far will I have gone by the end of 1 hour? Calculus is the branch of math that answers this question and many other related questions. (You seem to have accepted that the definition will be inadequate, so I give an inadequate one that I think captures the start of calculus.) I'm not really qualified to say if your "logos argument" will be better from knowing calculus. I strongly believe studying math makes your reasoning sharper in general. I would suggest studying real analysis, since "calculus" often means a non-rigorous approach, and many arguments from analysis are very subtle, requiring absolutely flawless reasoning to overcome a lack of intuition. 67.158.43.41 (talk) 02:29, 14 November 2010 (UTC)
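To make the speeding-up example concrete, here is a small numerical sketch (my own illustration, not part of the answer above): summing speed times small time steps gives a distance approaching 3 miles, the same as running at the average speed of 3 mph for the whole hour.

```python
# Distance traveled when speed increases linearly from 0 to 6 mph over 1 hour.
# Speed at time t (in hours) is v(t) = 6*t mph; approximate distance by a Riemann sum.

def distance(steps):
    dt = 1.0 / steps              # length of each small time interval, in hours
    total = 0.0
    for i in range(steps):
        t = i * dt                # left endpoint of the i-th interval
        total += 6 * t * dt       # speed * time = distance covered in this small piece
    return total

for steps in (10, 100, 10000):
    print(steps, distance(steps))  # approaches 3.0 miles as the number of steps grows
```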
Yes, unsigned number, I have a lot of confusion about calculus, which is why I asked the question. I was hoping someone could dumb it down for a dummy like me, because my brain starts to aneurysm when I read the wiki article on calculus. Now, if you'll excuse me, I have to finish eating my bowl of paintchips. AdbMonkey (talk) 02:47, 14 November 2010 (UTC)
- I once had a book years ago that described the essence of it in an understandable way in three or four pages, but as so many books have similar titles I cannot specifically identify it. There's also Calculus Made Easy available online for free, and I found this: http://www-math.mit.edu/~djk/calculus_beginners/ 92.15.7.155 (talk) 16:43, 14 November 2010 (UTC)
- Well, it's fine to recognize your confusion and try to address it, but you won't get there by trying to put things in a nutshell that don't fit in a nutshell. Find a calculus book and start working through it, ideally one that emphasizes physical intuition. Avoid like the plague the texts that are mainly aimed at teaching you algorithms. --Trovatore (talk) 03:00, 14 November 2010 (UTC)
- If you want to get more into the meat of what sort of issues calculus deals with here on wikipedia, you might check out limit (mathematics) and derivative. Rckrone (talk) 03:24, 14 November 2010 (UTC)
- Might I recommend Spivak's Calculus to the OP? 24.92.78.167 (talk) 03:26, 14 November 2010 (UTC)
- ... or, for a more informal and discursive presentation, you could try A Tour of the Calculus by David Berlinski (but be warned that Berlinski's idiosyncratic style of writing is not to everyone's taste). Gandalf61 (talk) 12:42, 14 November 2010 (UTC)
- Calculus is all about calculating. The things that are trivial to calculate involve adding up and/or multiplying finitely many numbers. This is what we learn in primary school. Calculus focusses on finding answers to sums that would involve an infinite number of elementary operations when directly evaluated. Count Iblis (talk) 04:24, 14 November 2010 (UTC)
- Calculus in a nutshell. A Boeing engineer submits specifications for an airplane that can decelerate continuously from its maximum landing speed of 100 miles per hour to 0, by applying its maximum safe braking, in 15 seconds. Is that fast enough to stop by the end of the runway? This is a calculus question, because the facts you know are about acceleration (change in speed), but the facts you want are about distance. This is what calculus is used for. To get from speed to distance, from acceleration to speed, from deceleration to distance. If you ever do engineering, it is important. I, as a computer engineer, have never once used calculus. I know this for a fact, because I failed it and didn't learn anything in that class, so I couldn't possibly have used it afterwards. It's just not important enough for me, and, being mildly retarded and dyslexic, I have a lot of trouble with symbols and abstract math. This is why I program in visual basic most of the time, which does not require it. 213.219.143.113 (talk) 08:39, 14 November 2010 (UTC)
- Try #2: At its core, calculus deals with problems by breaking them into an infinite number of infinitely small steps. Derivatives (slope-finding) work by breaking a curve into an infinite number of infinitely small straight lines, each of which you can find the slope of. Integration (area-finding) works by breaking a curved area into an infinite number of infinitely small rectangles, each of which you can find the area of. Calculus tells us that infinite sums (infinite series) can sum to finite numbers, if the terms in the series eventually get infinitely small (e.g. 1/2 + 1/4 + 1/8 + 1/16 .... = 1). -- 174.21.243.119 (talk) 19:30, 14 November 2010 (UTC)
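Here is a tiny numerical illustration of those two ideas (my own sketch, not from the thread): a slope estimated from a very short straight piece of a curve, and an area estimated from many thin rectangles.

```python
import math

# Derivative as the slope of a very short straight piece of the curve:
# estimate the slope of sin(x) at x = 0.5 using a tiny step h.
h = 1e-6
slope = (math.sin(0.5 + h) - math.sin(0.5)) / h
print(slope, math.cos(0.5))   # the two values agree to about 6 digits; cos(0.5) is the exact slope

# Integral as the total area of many thin rectangles:
# estimate the area under sin(x) from 0 to pi using n rectangles.
n = 100000
width = math.pi / n
area = sum(math.sin(i * width) * width for i in range(n))
print(area)                   # close to 2, the exact area under one arch of sin
```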
- Newton described his calculus in terms of fluxions (derivatives) and fluents (integrals). You may be interested in glancing at Method_of_Fluxions, and the full-text links within. Newton was very motivated by physics, but calculus is much more general, and physics examples strike many as dry. One (very informal, graphical) way of thinking of the fundamental theorem of calculus is that differentiation is akin to `zooming in' on a curve, until it appears as a straight line. Integration is like zooming out until the filled area under a curve looks like a rectangle. Differentiation is focusing/ sharpening, integration is blurring/ averaging. As such, they are opposite and inverse actions. The basic concepts of calculus can be explained very informally, but we need rigor and formality if we want to use it to make airplanes. SemanticMantis (talk) 20:06, 14 November 2010 (UTC)
- I don't understand taking integration as zooming out, differentiation as focusing/sharpening, or integration as blurring/averaging. Would you mind explaining a bit more? It might be interesting. I've always thought of the fundamental theorem of calculus as "where you are is the sum of where you've been heading", but that probably requires more explanation to be very descriptive. 67.158.43.41 (talk) 20:15, 14 November 2010 (UTC)
- First, I think your quote is in the spirit of the position being the integral of a velocity function. This is not the fundamental theorem, but an aspect of the integral alone. To (even paraphrase) the fundamental theorem, you have to identify the inverse operation. As for my analogy, the definite integral of a given function on [a,b] divided by |b-a| is an average value for f on that domain. Thus, it's fair to say loosely that integration is an averaging process. Perhaps smoothing is a better word than blurring. Consider a function F defined as the integral of some continuous function f. F is smoother than f, in the sense that it has at least as many continuous derivatives (and often more in practice). In particular, convolution integrals are a specific way of using one function to smooth out another, via averaging them together in local neighborhoods. SemanticMantis (talk) 23:23, 14 November 2010 (UTC)
- Blurring/averaging for integration I think I buy, and your explanation is what I expected, though I was somehow hoping for a more direct analogy. You didn't explain the others. I don't understand your criticism of my statement. The sum is taken in the limit of arbitrarily small intervals, and the summands are the y-distance traveled in each interval, given by derivatives. This is precisely the second part of the fundamental theorem of calculus. 67.158.43.41 (talk) 01:40, 15 November 2010 (UTC)
- I may have misunderstood your phrasing. This is why calculus is best treated formally. I just meant that the FTC essentially states that the antiderivative is the definite integral. It establishes identity between these two formally distinct entities, and shows that integration and differentiation are inverse operations. To me, this is the key, and even loose descriptions should attempt to convey it. At the core, the derivative says something about a function at a point, while the definite integral describes behavior over a region. This is the basis of the focus/ blur analogy. As for my other analogies, that the derivative is a 'zoom in' process to a linear approximation is fairly evident from the limit definition of the derivative, and well-founded. Integral as 'zoom out' is not as good, but I think it works at a stretch. Picture sin(x)+10. In a smaller viewing window, it looks wiggly. Zoom out a bit, and the shaded area below the curve begins to resemble a rectangle, whose area can be computed without calculus. Thus, we can think of finding slopes and areas as zooming in and out, which are obvious inverse processes. Shaky, I know. Just trying to give some indication of what it's all about. Hope this helps. For a true correct understanding, there's no substitute for a rigorous and formal treatment. SemanticMantis (talk) 02:13, 15 November 2010 (UTC)
- I can't quite agree on the zoom in/zoom out idea, though I certainly understand your desire to describe the derivative and integral symmetrically as physical inverses. I also agree formalizing these concepts is a fantastic idea for a "true" understanding, though I've known people whose only real problem with math was that they couldn't translate from "math speak" to their usual thought processes. For such people writing the symbols down is far less helpful than appealing directly to their intuition. 67.158.43.41 (talk) 05:12, 15 November 2010 (UTC)
The wordiness kind of throws me off, but I think I see. So calculus is no big deal and it's just about calculating things. Things that can help build things. I thought it would be somehow more impressive than that, but I guess that's all there is to it. All right, thanks. AdbMonkey (talk) 03:04, 15 November 2010 (UTC)
- In my experience, everything is less impressive when you understand it, from TV show plots to calculus. This helps explain why children are so much more excited than adults. 67.158.43.41 (talk) 04:15, 15 November 2010 (UTC)
- From a practical, "engineering" standpoint, calculus is just about calculating things to help build things, as you said. From a conceptual, "mathematical" standpoint, calculus gives us the tools in order to "do arithmetic involving infinity," roughly, which can mean infinitely big values, infinitely small values, or infinitely many steps in the calculation. These conceptual breakthroughs are what make calculus such a big deal in the mathematics world. —Bkell (talk) 05:08, 15 November 2010 (UTC)
Thanks, but I'm not reading any of those big sophisticated books that are more for people who easily understand the subject. I was looking more for a nice kindergarten version of what calculus is, as if it were in a children's book. I doubt, since no one said it helps improve your rhetorical style, logical fallacies and whatnot, that it would be useful to me. I was thinking logic= logos? Connection, right? :P Oh well. Thanks for assuring me that there is absolutely no way that knowing calculus will help improve your logical reasoning. This really makes me feel like I'm not missing anything important in life. AdbMonkey (talk) 03:05, 16 November 2010 (UTC)
- "Thanks for assuring me that there is absolutely no way that knowing calculus will help improve your logical reasoning." I strongly disagree. From your statements, I think a little first-order logic would be much more useful to you than calculus in this regard. It seems like all of the content you're really after. Some of that stuff actually can be explained adequately to a kindergartner too, IMO. 67.158.43.41 (talk) 03:47, 16 November 2010 (UTC)
THAT can be explained to a kindergartner? :( Thanks for the effort anyway. AdbMonkey (talk) 08:39, 16 November 2010 (UTC)
- Lewis Carroll wrote a book called Symbolic Logic that he thought was appropriate for young children. On the calculus side, there's also a book called Calculus for Cats that might appeal to you. —Bkell (talk) 15:54, 16 November 2010 (UTC)
- Formal treatment of calculus is known for requiring a particularly rigorous way of thought. It has many counterintuitive phenomena, and concepts which are transformed completely if you change a single detail. So learning calculus certainly can improve your logical reasoning, even if not directly.
- Calculus is the way the universe works. In particular, the most fundamental laws of physics are particular differential equations. I'd say that being completely oblivious to it is a pretty important thing to miss in life. -- Meni Rosenfeld (talk) 16:13, 16 November 2010 (UTC)
- To be clear, the first order logic page I linked is written formally. Informally, you could start describing first order logic by asking "if I have a chocolate in one of my hands and I show you it's not in my left hand, which hand is it in? How do you know?" 67.158.43.41 (talk) 22:06, 16 November 2010 (UTC)
I like the way that you explain it, .41. This is good palatable stuff. Do you have any other stuff like your examples? Your examples only please. Not a scary book. AdbMonkey (talk) 22:16, 16 November 2010 (UTC)
- I'm sorry, I'm making them up off the top of my head. Similar examples would be models for easy first order logic proofs. To make more, one could translate the formal logic of an easy proof into English (a little like this section of the Boolean logic article). Above all the point is to avoid jargon. There must be books out there with a treatment of logic you'd like. A brief search found Logic for Dummies. I don't necessarily endorse it since I haven't read it, but I glanced through it on Google Books and from that brief inspection it seems like what you might be interested in. I do strongly recommend formal logic if you're interested in making good arguments: "practice makes perfect". 67.158.43.41 (talk) 21:11, 17 November 2010 (UTC)
Thank you very much. That's a lot of helpful advice. Thanks for answering. :) AdbMonkey (talk) 20:39, 18 November 2010 (UTC)
November 14
System of nonlinear equations (I think?)
How would I solve a system like this? (Not homework, just curious.)
I'm only in Pre-calc, but I guess this is a college-level algebra problem?
It would be easy if it weren't for those three darned variables in the denominators. 141.153.215.139 (talk) 21:51, 14 November 2010 (UTC)
- Actually that one happens to be easy: call 1/x = a, 1/y = b, 1/z = c, you end up with 3 equations in the 3 unknowns a, b, c. Solve & substitute back. No calculus needed. In general there is no standard method for systems of nonlinear equations. Some are solvable by algebra if you have the right idea, others simply aren't. If you cannot solve them by algebra, there is always Newton's method to get you an arbitrarily exact approximate solution (but that does require calculus). 86.147.175.50 (talk) 22:59, 14 November 2010 (UTC)
- Convergence of Newton's method is not at all guaranteed in general. My opinion is that nonlinear (perhaps differential) equations encode much of the universe's physics, at least to a good approximation, and so should be utterly horrific to solve in general. 67.158.43.41 (talk) 04:19, 15 November 2010 (UTC)
- (edit conflict) Substitute each of the variables as follows a = 1/x, b = 1/y and c = 1/z. Your problem becomes
- You can solve this using simple linear algebra. You have a = 52/29, b = 89/29 and c = 231/58. Invert to get the values of x, y and z, i.e. x = 29/52, y = 29/89 and z = 58/231. — Fly by Night (talk) 23:01, 14 November 2010 (UTC)
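The original system of equations isn't reproduced here, so the coefficients below are hypothetical placeholders, but this sketch shows the substitution trick computationally: set a = 1/x, b = 1/y, c = 1/z, solve the resulting linear system, then take reciprocals. The matrix M and vector rhs are stand-ins of my own, not the OP's actual equations.

```python
import numpy as np

# Hypothetical example of a system that is linear in 1/x, 1/y, 1/z:
#   2/x + 1/y - 1/z = 3
#   1/x - 1/y + 4/z = 5
#   3/x + 2/y + 1/z = 7
# With a = 1/x, b = 1/y, c = 1/z this becomes an ordinary linear system M @ [a, b, c] = rhs.
M = np.array([[2.0,  1.0, -1.0],
              [1.0, -1.0,  4.0],
              [3.0,  2.0,  1.0]])
rhs = np.array([3.0, 5.0, 7.0])

a, b, c = np.linalg.solve(M, rhs)   # solve the linear system for a, b, c
x, y, z = 1/a, 1/b, 1/c             # undo the substitution
print(x, y, z)

# Sanity check: plug x, y, z back into the reciprocal form of the equations.
print(np.allclose(M @ np.array([1/x, 1/y, 1/z]), rhs))
```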
Systems of n algebraic equations in n unknowns may be solved systematically in two steps:
- Elimination of variables leading to n equations, each in 1 unknown.
- Each equation is solved numerically by a root-finding method.
Step 1 is exact while step 2 is approximate. So even if you didn't discover the above trick you can still solve the problem by doing some extra work. Bo Jacoby (talk) 10:29, 15 November 2010 (UTC).
- To Bo Jacoby: I don't believe that one. You can't generally eliminate variables from a system of equations with a symbolic method in such a way that you get equations in a single variable. – b_jonas 16:03, 16 November 2010 (UTC)
- I thought the same thing, except he said "algebraic" equations, which was left a bit vague. Taken weakly enough, it's correct, but I think that would have to be a non-standard interpretation of an "algebraic equation". 67.158.43.41 (talk) 22:00, 16 November 2010 (UTC)
I should have put algebraic equations in square brackets to link to the definition. Algebraic equations generalize systems of linear equations, and exceptions regarding dependence still exist. In a system of equations like x−y=0, xx−xy=0 the second equation does not provide new information, so even if there are apparently two equations, there is effectively only one. A system of equations like x−y=xx+yy−2=0 implies xx−1=yy−1=0 by step 1. Bo Jacoby (talk) 10:46, 17 November 2010 (UTC).
- For monomial terms in a single variable I see what you mean. How do you deal with mixed terms, like xy, in general? 67.158.43.41 (talk) 03:59, 18 November 2010 (UTC)
- Scratch that. I only see what you mean in a few degenerate cases. For instance, how does one apply elimination to xy+x^2+y^3 = x^2 y^4 + y^5 = 0? 67.158.43.41 (talk) 04:34, 18 November 2010 (UTC)
Let the system of equations be P1=P2=0 where P1 and P2 are polynomials in x and y. Then x^2+xy+y^3 = x^2y^4+y^5 = 0. One obvious solution is (x,y)=(0,0). To find other solutions set P2 := P2/y^4. Then x^2+xy+y^3 = x^2+y = 0. To eliminate y^3, set P1 := P1−y^2·P2. Then x^2+xy−x^2y^2 = x^2+y = 0. Set P1 := P1/x. Then x+y−xy^2 = x^2+y = 0. To eliminate y^2, set P1 := P1+xy·P2. Then x+(x^3+1)y = x^2+y = 0. To eliminate y from P1, set P1 := P1−(x^3+1)P2. Then x−x^5−x^2 = x^2+y = 0. Set P1 := −P1/x. Then x^4+x−1 = x^2+y = 0. Solving x^4+x−1=0 numerically and substituting x into y=−x^2 gives the real solutions (x,y)=(−1.22074,−1.49022), (x,y)=(0.724492,−0.524889), and the complex solutions (x,y)=(0.248126+1.03398i, 1.00755−0.513116i), (x,y)=(0.248126−1.03398i, 1.00755+0.513116i), in addition to the trivial solution (x,y)=(0,0). Bo Jacoby (talk) 11:55, 18 November 2010 (UTC).
PS. http://www.wolframalpha.com/input/?i=xy%2Bx^2%2By^3+%3D+x^2+y^4+%2B+y^5+%3D+0 does not identify x=y=0 as an exact solution. Perhaps wolframalpha does not eliminate variables? Bo Jacoby (talk) 10:18, 20 November 2010 (UTC).
- I'm guessing that if there isn't a nice exact form for all solutions, it will give numerical forms for everything. Try it with a system that does have one. -- Meni Rosenfeld (talk) 16:01, 20 November 2010 (UTC)
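As a cross-check of the hand elimination above, here is a hedged SymPy sketch of my own (not part of the thread): it computes a lex Gröbner basis for the same pair of polynomials, which plays the role of the elimination, and then solves the system. By hand, the solution set works out to (0,0) together with the four roots of x^4 + x − 1 = 0 paired with y = −x^2.

```python
from sympy import symbols, groebner, solve, Poly

x, y = symbols('x y')
P1 = x**2 + x*y + y**3
P2 = x**2*y**4 + y**5

# A lex Groebner basis with y ordered before x eliminates y:
# the basis contains a polynomial in x alone, mirroring the hand computation.
G = groebner([P1, P2], y, x, order='lex')
print(G)

# Exact solutions of the system; from the hand computation we expect (0, 0)
# plus the four roots of x**4 + x - 1 = 0 with y = -x**2.
for s in solve([P1, P2], [x, y], dict=True):
    print(s)

# Numerical values of the nontrivial x's, for comparison with the figures quoted above.
print([r.evalf(6) for r in Poly(x**4 + x - 1, x).all_roots()])
```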
November 15
(delta, epsilon)
How could I show that (e^x − 1)/x → 1 as x → 0 using delta-epsilon? L'Hopital's Rule is out of the question. 24.92.78.167 (talk) 02:16, 15 November 2010 (UTC)
- What definition of e^x are you allowed to use? There are quite a few--see characterizations of the exponential function for several. 67.158.43.41 (talk) 04:21, 15 November 2010 (UTC)
- Exactly. It's quite a tricky one, this. If the OP wants a nice εδ-proof then s/he needs to show that for each real ε > 0 there exists a real δ > 0 such that for all x with 0 < | x | < δ, we have | (e^x − 1)/x − 1 | < ε.
- If you try to solve this inequality explicitly to get your δ from a given ε then you need the Lambert W function which, in this case, is as useful as a chocolate fire guard. The exponential function is an entire function, so we might as well jump to the power series instead: e^z = 1 + z + z^2/2! + z^3/3! + ...
- (Notice I changed from x to z, meaning I am working over the complex plane and not just the real line.) Doing a little bit of algebra shows us that (e^z − 1)/z = 1 + z/2! + z^2/3! + z^3/4! + ...
- The rest is straightforward: we find that (e^z − 1)/z → 1 as z → 0.
- One could apply the εδ-argument to the above power series, but there really is no point. — Fly by Night (talk) 21:01, 15 November 2010 (UTC)
- It should be mentioned that this argument relies on the fact that the series for (e^z−1)/z converges for all z in a neighborhood around 0. It's fairly clear that this is true since (e^z−1)/z is defined and finite everywhere except at 0, but in general it's something to be careful about when you want to reverse the order of infinite summation and taking a limit. Rckrone (talk) 01:46, 16 November 2010 (UTC)
- Thanks for mentioning that. I didn't mention it because the series has an infinite radius of convergence. But it was worth mentioning. Thanks Rckrone. — Fly by Night (talk) 16:15, 16 November 2010 (UTC)
- It follows quite easily from the differential equation definition of the exponential function, i.e. (e^x)' = e^x and e^0 = 1. I didn't suggest this at first since the equivalences can be more involved than your question, but now that another proof has been given, I'll suggest mine. With the differential equation definition, we can take a 3-term Taylor series about 0 to get
- e^x = 1 + x + x^2/2 + (e^ξ/6) x^3 for each x, for some ξ between 0 and x, using the Lagrange form of the remainder term as discussed in Taylor's theorem. The equation is then (e^x − 1)/x = 1 + x/2 + (e^ξ/6) x^2.
- Since e^x is differentiable, it is continuous, so e^ξ can be taken arbitrarily close to e^0 = 1 by taking x small enough to force ξ small enough. So we may split up this three-term limit into a sum of three limits, and we may further split the final term into the product of two limits. This gives
- lim_{x→0} (e^x − 1)/x = lim_{x→0} 1 + lim_{x→0} x/2 + (lim_{x→0} e^ξ/6)(lim_{x→0} x^2)
- = 1 + 0 + (1/6)(0)
- = 1.
- If you're sadistic enough to use epsilon-delta the entire way instead of relying on other properties, you can unwind the epsilons and deltas used in the proofs of splitting limits over sums and products, and start the entire process with a delta found from the continuity of e^x which bounds e^ξ. It doesn't seem at all helpful, though. 67.158.43.41 (talk) 03:40, 16 November 2010 (UTC)
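If it helps to see the claim checked by machine, here is a small SymPy aside of my own confirming the limit and the series expansion used in the two arguments above.

```python
from sympy import symbols, exp, limit, series

x = symbols('x')
f = (exp(x) - 1) / x

# The limit itself:
print(limit(f, x, 0))        # 1

# The expansion used above: (e^x - 1)/x = 1 + x/2 + x**2/6 + x**3/24 + ...
print(series(f, x, 0, 4))    # 1 + x/2 + x**2/6 + x**3/24 + O(x**4)
```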
Limit of [sin (x - sin x)] / x^3
Hi Reference desk
I tried solving lim x-->0 [sin (x - sin x)] / x^3 and I think I ended up over complicating things. I checked with a calculator the limit is supposed to be 1/6 but I keep on getting 1/2. Can anyone teach me how to do this?
Thanks! —Preceding unsigned comment added by 169.232.246.218 (talk) 07:55, 15 November 2010 (UTC)
- Do you know the MacLaurin series expansion for sin(x)? Since the denominator is x-cubed, you only need to expand the top up to x-cubed to see the answer. The other method is to use L'Hospital's rule, taking derivatives of the top and bottom until you get a limit you can evaluate. (In this case, that would be 3 derivatives.) I think the series expansion is a better method, and it certainly is faster than L'Hospital's rule for this problem. If you're stuck on any particular step, explain your work so far.140.114.81.55 (talk) 08:28, 15 November 2010 (UTC)
- The series for sin(x) is readily available; that for sin(x-sin(x)) not so much. To get it means differentiating, so one may as well use L'Hôpital's rule (by which I do get 1/6). —Tamfang (talk) 08:46, 15 November 2010 (UTC)
- No, you can find it with substitution: x − sin(x) = x^3/6 − x^5/120 + O(x^7), so
- sin(x − sin(x)) = (x^3/6 − x^5/120 + O(x^7)) + O(x^9) = x^3/6 + O(x^5).
- -- Meni Rosenfeld (talk) 17:25, 15 November 2010 (UTC)
http://www.wolframalpha.com/input/?i=sin(x-sin(x))+%2Fx^3 gives you the power series. If you want to do it by hand, substitute sin(x) ≈ x − x^3/6, x − sin(x) ≈ x − (x − x^3/6) = x^3/6, sin(x − sin(x)) ≈ sin(x^3/6) ≈ x^3/6, so sin(x − sin(x))/x^3 ≈ (x^3/6)/x^3 = 1/6. Bo Jacoby (talk) 10:39, 15 November 2010 (UTC).
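A quick machine check of the 1/6 answer and of the series substitution described above (my own sketch, using SymPy):

```python
from sympy import symbols, sin, limit, series

x = symbols('x')
expr = sin(x - sin(x)) / x**3

print(limit(expr, x, 0))       # 1/6
print(series(expr, x, 0, 4))   # 1/6 - x**2/120 + O(x**4), so the value at 0 is 1/6
```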
How to know if a system of linear equations has an infinite number of solutions or none
I know that if a system of linear equations is expressed in the form Ax = y, where A is a square matrix, x is a column vector of variables and y is a column vector, then if det(A)=0 the system of linear equations has either an infinite number of solutions, or it has none. How do you know which one of these is the case (i.e. whether there are an infinite number of solutions, or no solutions)? 220.253.217.130 (talk) 08:35, 15 November 2010 (UTC)
- Hi, you can find the answer under the section "Determining the Number of Solutions of a Nonhomogeneous System of Equations"[3] with some examples. Hope this helps. ~ Elitropia (talk) 11:44, 15 November 2010 (UTC)
- Use Gaussian elimination to solve the system. If it fails to find a solution, there is no solution. If it finds a solution, you know that there is at least one solution, and therefore (because of det(A) = 0) there are infinitely many.—Emil J. 14:54, 15 November 2010 (UTC)
- If det(A)=0, I think Gaussian elimination leaves you with an identity (like 0=0) if there are infinitely many solutions and a contradiction (like 0=1) if there are none. Gandalf61 (talk) 15:06, 15 November 2010 (UTC)
- You'll only end up with a straight identity if everything is a solution. Otherwise you get a system containing identities and nonidentities, such as x-y=0, 0=0, giving the solution set {(t,t)}. Algebraist 15:12, 15 November 2010 (UTC)
- (e/c) Not at all. Regardless of regularity of the system, Gaussian elimination gives you a modified linear system A'x = y' where A' is in a reduced row-echelon form. Then the system is solvable iff the entries in y' corresponding to the zero rows of A' are zero, and the algorithm tells you how to find the solution. This is the whole point of Gaussian elimination, that it works even for non-regular (or non-square, for that matter) linear systems, unlike e.g. Cramer's rule.—Emil J. 15:15, 15 November 2010 (UTC)
- An elegant characterization is that in this case, the system has solutions iff the augmented matrix, (A | y), has the same rank as A. Though I think the best way to actually find this out is to do Gaussian elimination as above. -- Meni Rosenfeld (talk) 17:21, 15 November 2010 (UTC)
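A small NumPy illustration of the two cases described above (my own sketch): comparing rank(A) with the rank of the augmented matrix decides between infinitely many solutions and none when det(A) = 0.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])        # det(A) = 0: the second row is twice the first

def classify(A, y):
    """Classify Ax = y when A is square and singular."""
    aug = np.column_stack([A, y])
    if np.linalg.matrix_rank(aug) == np.linalg.matrix_rank(A):
        return "consistent: infinitely many solutions (since det(A) = 0)"
    return "inconsistent: no solutions"

print(classify(A, np.array([3.0, 6.0])))   # y respects the proportionality of the rows
print(classify(A, np.array([3.0, 7.0])))   # y does not, so the equations contradict each other
```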
Group theory questions
I have two questions: 1. If H is a subgroup of G containing all squares then H is normal. 2. If gcd(m,n)=1 and mth powers of G commute with each other and so do nth powers, then G is abelian. Thanks-Shahab (talk) 16:16, 15 November 2010 (UTC)
- Ad 1: h^g = (g^−1)^2 (gh)^2 h^−1.—Emil J. 16:35, 15 November 2010 (UTC)
- Ad 2: let H be the subgroup generated by mth powers, and K the subgroup generated by nth powers. It follows easily from the assumptions that H and K are both abelian. Also, both are normal (in fact, characteristic) subgroups, and Bézout's identity implies that HK = KH = G. Thus, G/(H ∩ K) is abelian. Since H ∩ K ⊆ Z(G), G is class-2 nilpotent (and, in particular, metabelian). I don't know how to continue the argument. I'm probably missing something basic.—Emil J. 17:48, 15 November 2010 (UTC)
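The identity in the hint for part 1 is a formal identity valid in any group: the right-hand side is a product of two squares and h^−1, and it collapses to the conjugate g^−1 h g, which is why a subgroup containing all squares and containing h must contain that conjugate. Here is a quick computational sanity check of my own, using SymPy's permutation groups.

```python
from sympy.combinatorics.named_groups import SymmetricGroup
import random

# Check  g^-1 h g = (g^-1)^2 (g h)^2 h^-1  on random elements of S_5.
# It follows formally from associativity (the inner factors cancel),
# so it must hold for every pair (g, h); this just verifies the algebra.
elements = list(SymmetricGroup(5).generate())
random.seed(0)
for _ in range(100):
    g, h = random.choice(elements), random.choice(elements)
    lhs = (~g) * h * g                              # the conjugate of h by g
    rhs = (~g) * (~g) * (g * h) * (g * h) * (~h)    # two squares times h^-1
    assert lhs == rhs
print("identity verified on 100 random pairs in S_5")
```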
A question on Riemann surfaces and spaces of germs
Hello everyone, I would really, -really- appreciate some detailed help on this question: I am taking a lecture course on Riemann Surfaces, and the lecturer has failed to explain how to approach a question like this at all unfortunately - plenty of theorems (monodromy, existence of lifts etc.) but nothing in the way of examples on how to approach a question like this.
Once I'm done with this problem I've got 4 or 5 more along very similar lines, and given that I'd like to try and get those done completely by myself following this, I would really appreciate as much detail as you can possibly give me. Obviously I will be using my own mathematical knowledge/intuition, but the more detail & help I can get from you for this one question, the more likely I am to grasp the concept for subsequent ones.
The question is as follows: "Show that the component of the space of germs over ℂ∖{0} corresponding to the complex logarithm is analytically isomorphic to the Riemann surface constructed by gluing, and hence also analytically isomorphic to ℂ."
Now I believe I can choose a 'base point' somewhere in the surface constructed by the gluing and then use analytic continuation to extend this to any given point - I am also fairly sure that whatever path we choose to extend it, we should get the same end result presumably, because we need the map to be well-defined - I suspect this follows from the Monodromy theorem.
However, I don't know where to go from here. Obviously we can project the Riemann surface down onto ℂ∖{0} by 'flattening it', but at the same time I believe the surface is simply connected (its complement is just {0}, which is connected), and my thoughts are something along the lines of mapping from a point on the surface to the germ given by analytic continuation of the logarithm from our arbitrary fixed basepoint to the point on the surface - however, I'm uncertain as to how to formalize this argument, among other issues.
For one thing, how can a map be analytically isomorphic onto the space of germs? Aren't germs (as far as we've got in the course) considered to be essentially functions at a point under the equivalence relation f~g at the point x iff they are identical on some open neighbourhood of x? So how can something be analytic onto such a space? I think part of the issue is that I am being expected (sadly!) to hand this work in before the lecturer is completely done covering the topic - so how would you go about a problem like this? As much as the mathematical content, actually knowing -what- to write is unclear to me too, so if you would be able to provide me with an example of how something like this should be properly attacked, that would be incredibly helpful. (The next question, for example, discusses the situation with the function (z^3 − z)^(1/2), and I would like to give that a go ASAP, so please respond if you can.) I hugely appreciate your response in advance, it is desperately needed! ;-) Typeships17 (talk) 20:26, 15 November 2010 (UTC)
- I've a feeling the question is referring to the fact that the Complex logarithm is not uniquely defined. So the space of germs corresponding to the complex logarithm is going to consist of multiple elements, hence you can construct a space of all those germs. The question is a little unclear; it seems some words are missing, so it's hard to tell. The picture lower down in Complex logarithm might also help.--Salix (talk): 23:46, 15 November 2010 (UTC)
- Yes, I believe that's exactly what it refers to, since we construct the Riemann surface by identifying the distinct branches of the logarithm and gluing them together - and I agree, the wording of the question is poor, but unfortunately that's how it was stated. I believe each point in ℂ∖{0} should have a set of germs corresponding to log z, one for each 'sheet' of the construction. So how would you actually construct the analytic isomorphism, and show it is analytic and isomorphic then? The way I picture the glued surface is actually -already- as copies of ℂ∖{0}, or rather as (ℂ∖{0}) × ℤ, but now it appears that this is also the structure of the space of germs - however, I find it hard to believe that my map is from (z,n) to the germ corresponding to Log(z) + 2πin, where Log is a fixed branch - that seems almost -too- obvious in a way, and I suspect I've made an incorrect assumption which has trivialized a non-trivial question. Typeships17 (talk) 00:37, 16 November 2010 (UTC)
- I often find that the simplest questions are often the hardest. How about this: we have the glued surface, and you can parameterise it as (r,θ) with r > 0 and θ ∈ ℝ, which is easy to show is just ℂ.--Salix (talk): 06:11, 16 November 2010 (UTC)
- But surely if r > 0, we can never hit zero, so that's a map onto the punctured complex plane ℂ∖{0} instead of ℂ? Typeships17 (talk) 10:45, 16 November 2010 (UTC)
- You could use the real log function to rescale the domain, replacing r > 0 by log r ∈ ℝ. --Salix (talk): 14:37, 16 November 2010 (UTC)
November 16
Maximize volume of a cone
Hi all,
How do you find out the angle that maximizes the volume of a cone using calculus?
More specifically, http://jwilson.coe.uga.edu/emt725/Class/Lanier/Cone/image6.gif
Given that R is constant, what angle (as shown in the diagram) maximizes the volume of the cone? Here is what I got so far:
- Arc length = (1-θ/2π)2πR
- Circumference of base = 2πr
Arc length = Circumference of base
- (1-θ/2π)2πR = 2πr
- r = (1-θ/2π)R
Height of cone
- h = sqrt(R^2 - r^2)
- h = R sqrt(1-(1-θ/2π)^2)
Volume of cone
- V = (1/3) π r^2 h
- V = (1/3) π (1-θ/2π)^2 (R^2) (R sqrt(1-(1-θ/2π)^2))
- V = (1/3) π R^3 (1-θ/2π)^2 sqrt(1 - (1-θ/2π)^2)
What do I do from here? I tried differentiating the function with respect to θ and let it equal to 0:
- 0 = (1-θ/2π)sqrt(1-(1-θ/2π)^2) + (1/2)(1-θ/2π)^3(1-(1-θ/2π)^2)^(-1/2)
How do I solve for θ and how do I justify that the θ I find gives the maximum volume?
Thanks in advance! —Preceding unsigned comment added by 169.232.246.52 (talk) 05:01, 16 November 2010 (UTC)
- The expressions will be simpler if you use the other angle (α=2π-θ) and convert afterward; and the derivative will be simpler if you square the volume. I'm just sayin'. —Tamfang (talk) 05:32, 16 November 2010 (UTC)
- I didn't check your work, but from your final equation you can substitute cosβ = 1-θ/2π (so sinβ = sqrt(1-(1-θ/2π)^2)) to get
- 0 = cosβ sinβ + (1/2) (cosβ)^3 / sinβ, which is easy to solve--multiply by sin, divide by cos, and replace the sin^2 with 1-cos^2. Any extrema of a differentiable function on a closed interval occurs either at the endpoints or at a point where the first derivative is zero. So, check the volume at the extreme values of theta and at the value induced by the above equation. If the latter is the largest, it's the global maximum. 67.158.43.41 (talk) 08:29, 16 November 2010 (UTC)
- Using cosβ = 1-θ/2π and sinβ = sqrt(1-(1-θ/2π)^2):
- 0 = cosβ sinβ + (1/2) (cosβ)^3 / sinβ
- 0 = cosβ (sinβ)^2 + (1/2) (cosβ)^3
- 0 = cosβ (1 - (cosβ)^2) + (1/2) (cosβ)^3
- 0 = cosβ - (cosβ)^3 + (1/2) (cosβ)^3
- 0 = cosβ - (1/2) (cosβ)^3
- 0 = cosβ( 1 - (1/2) (cosβ)^2)
- First case
- cosβ = 0
- 1-θ/2π = 0
- θ = 2π
- Second case
- ( 1 - (1/2) (cosβ)^2)= 0
- cosβ = sqrt(2) / cosβ = -sqrt(2)
- 1-θ/2π = sqrt(2) / 1-θ/2π = -sqrt(2)
- θ = (1-sqrt(2)) (2π) / θ = (1 + sqrt(2)) (2π)
- Is this how I do it? —Preceding unsigned comment added by 169.232.246.199 (talk) 20:28, 16 November 2010 (UTC)
- Your second case is off; it should be |cos beta| = sqrt(2). I get theta = (1 +/- sqrt(2)) * 2pi, as does Wolfram Alpha (which I'd say you should only use to double-check your work). 67.158.43.41 (talk) 21:21, 16 November 2010 (UTC)
Homeomorphisms of the closed disk
Perhaps this is a dense question, but how do I go about seeing that all orientation-preserving homeomorphisms of the closed disk are isotopic to the identity map? It's obvious in the hand-wavey case of homeomorphisms which can be visualised by stretching/contorting the space in a continuous manner, but how do I know that all (orientation-preserving) homeomorphisms are of this type, and that there aren't some exotic ones which can't be isotop'd to the identity map? Thanks, Icthyos (talk) 16:16, 16 November 2010 (UTC)
- (I did of course mean to say orientation-preserving homeomorphisms from the closed disk to itself...) Icthyos (talk) 10:54, 17 November 2010 (UTC)
- See Alexander's trick (is having a 'trick' named after you even better than a lemma?). I'll put a reference to this under isotopy. Dmcq (talk) 11:58, 17 November 2010 (UTC)
- Ah-hah, I see. Thanks! (I'd argue that calling something a 'trick' makes it seem less grand (and a bit more trivial) than a lemma - even if it's not...) Icthyos (talk) 12:14, 17 November 2010 (UTC)
- There was a discussion here about whether it was better to have a lemma named after you or a theorem ;-) Dmcq (talk) 12:59, 17 November 2010 (UTC)
I'm not altogether sure about the homeomorphisms where the boundary is not fixed and is a 4-sphere. Is this part of the Generalized Poincaré conjecture? Dmcq (talk) 14:23, 17 November 2010 (UTC)
- I feel that question is a little over my head! I'm working with mapping class groups though, so I always assume that the boundary components are fixed, pointwise. On a related note, how does one go about showing that every orientation-preserving self-homeomorphism of the 2-sphere is isotopic to the identity? I thought about decomposing the sphere as two closed disks, joined along their boundaries, and using Alexander's trick simultaneously on both, but then I realised that would require the homeomorphism of the sphere to leave a great circle fixed, point-wise, which I can't see always being true. Is it possible to adapt the Alexander argument to see that the mapping class group of the 2-sphere is trivial, or is another insight needed? Thanks, Icthyos (talk) 19:55, 17 November 2010 (UTC)
Pi = 4
I'm so confused, was everything I was told about math a lie?
http://tinypic.com/r/27wufn/7 76.68.247.201 (talk) 16:50, 16 November 2010 (UTC)
- The problem is with the last step. How do you know that the limit of the cutting process gives a circle? Something is clearly special close to the points where the original square was tangent to the circle (at 12 o'clock, 3 o'clock, 6 o'clock and 9 o'clock). My imagination tells me that the limit will be a <s>hexagon</s> octagon and not a circle. — Fly by Night (talk) 17:02, 16 November 2010 (UTC)
- No, the limit is a circle. The problem is with the implicit assumption that the length of the limiting curve is the limit of the lengths of the curves. No argument is given for this assumption and it is in fact false. Algebraist 17:06, 16 November 2010 (UTC)
- (edit conflict) Whoops, I meant octagon. — Fly by Night (talk) 17:07, 16 November 2010 (UTC)
- Interesting... Is there a link to show that the assumption is wrong? — Fly by Night (talk) 17:09, 16 November 2010 (UTC)
- The OP's original link shows this, since it proves a false conclusion from this assumption. You can do the same thing without the complication of the circle by considering instead the diagonal of the unit square. If we approximate this in the same way with curves that are everywhere either horizontal or vertical, then all the approximating curves will have length 2, but will converge uniformly to the diagonal. Algebraist 17:13, 16 November 2010 (UTC)
- Does that show that the lengths of the curves don't converge to the length of the diagonal, or that the curves themselves don't converge to the diagonal? In the circle case you have a sequence of lengths (s_n) with the property that s_n = 4 for all n. It stands to reason that the limit of s_n as n tends to infinity is 4; while the circumference of the circle is 2π ≠ 4. This would suggest to me that the limit of the zig-zags is not actually the circle. — Fly by Night (talk) 17:22, 16 November 2010 (UTC)
- The limit of the zig-zags is most definitely the circle. The original construction may be a bit messy to rigorously analyze, so here's a different construction: take a regular square grid with distance ε between the grid lines, and consider a zig-zag line on the grid which goes as close to the circle as possible. Its length will be still 4, but it will be within distance ε√2 of the circle. By taking a sequence of such zig-zags for ε_n → 0, you will get a sequence of curves of length 4 whose limit is the circle.—Emil J. 17:44, 16 November 2010 (UTC)
- Another example: let c_k be the piecewise linear function going through the points (0,0), (1/(2k^2), 1/k), (2/(2k^2), 0), (3/(2k^2), 1/k), (4/(2k^2), 0), ..., ((2k^2 − 1)/(2k^2), 1/k), (1,0). Then the c_k converge uniformly to the line from (0,0) to (1,0) as k → ∞, but their lengths, bounded below by 2k, go to infinity.—Emil J. 17:53, 16 November 2010 (UTC)
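To see numerically that "limit of lengths" and "length of the limit" can disagree, here is a small Python sketch of my own based on the diagonal-of-the-unit-square version mentioned earlier in this thread: every staircase has length exactly 2, yet the staircases get uniformly close to the diagonal, whose length is √2.

```python
import math

def staircase(n):
    """Axis-parallel staircase from (0,0) to (1,1) with n steps (go right, then up)."""
    pts = [(0.0, 0.0)]
    for _ in range(n):
        x0, y0 = pts[-1]
        pts.append((x0 + 1.0/n, y0))          # horizontal piece
        pts.append((x0 + 1.0/n, y0 + 1.0/n))  # vertical piece
    return pts

def length(pts):
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

def max_dist_to_diagonal(pts):
    # Perpendicular distance from (x, y) to the line y = x is |y - x| / sqrt(2);
    # for this polyline the maximum is attained at the corner vertices.
    return max(abs(y - x) / math.sqrt(2) for x, y in pts)

for n in (1, 10, 100, 1000):
    pts = staircase(n)
    print(n, length(pts), max_dist_to_diagonal(pts))
# The length is 2 for every n, while the distance to the diagonal shrinks like 1/n;
# the diagonal itself has length sqrt(2), roughly 1.414.
```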
- It's some very funny stuff, this, if you ask me. It just doesn't feel right. Each successive zig-zag curve touches the circle in a finite number of points. But in the limit, it touches at all points. So a sequence of discrete points has a limit of the whole space. But the cardinality of a finite, discrete set is different from that of the line. I guess the fact that the circle is compact plays a role here. In a compact space, every sequence has a convergent subsequence; but we're asking that every point be an adherent point of the limit of the sequence. Consider the map g : Z → S1 given by g(n) = (cos(n), sin(n)); is g(Z) = S1? It's very much like space filling curves. — Fly by Night (talk) 18:05, 16 November 2010 (UTC)
- I don't understand why it doesn't feel right to you. The curves obviously approach the limiting curve as n → ∞, but of course each successive curve doesn't intersect the limiting curve at all. —Bkell (talk) 19:58, 16 November 2010 (UTC)
- Yeah, after thinking about it again, your topological arguments don't make sense. It isn't the sets of points of intersection whose limit is the circle; the circle is the limit of the zigzag curves. As another example, consider the curves y = sin(x)/n for n = 1, 2, 3, …. The limiting curve here is plainly the x-axis y = 0, but the set of intersection points at every stage is the same set {(kπ, 0) : k ∈ Z}. To answer your question ("is g(Z) = S1?"), the answer is no: the set g(Z) is countable, while the set S1 is uncountable. For a specific example of a point in S1 which is not in g(Z), consider the point (−1, 0). —Bkell (talk) 20:59, 16 November 2010 (UTC)
- What is true is that g(Z) is dense in S1. —Bkell (talk) 21:01, 16 November 2010 (UTC)
- (edit conflict) By giving the height and depth of each "step" appropriate values, you can fit the staircase as closely as you like to the graph of any monotonic function between the points (-1/2,0) and (0,1/2), thus "proving" that the length of any such curve is 1. Gandalf61 (talk) 17:16, 16 November 2010 (UTC)
- Don't you mean 1/√2? 76.68.247.201 (talk) 18:14, 16 November 2010 (UTC)
- No, he doesn't. Algebraist 18:26, 16 November 2010 (UTC)
- How silly of me, for some reason I was using Pythagoras...76.68.247.201 (talk) 19:39, 16 November 2010 (UTC)
- I couldn't find an article about this fallacy, Mathematical fallacies has nothing like it. I'm pretty certain I've seen things like this in reliable sources and it is notable. Any idea if it is covered somewhere else? Dmcq (talk) 17:47, 16 November 2010 (UTC)
- It reminds me of something I read about a long time ago. All of these curves lie outside the circle and they all have greater lengths and internal areas than the circle. This gives upper bounds. Then you start with a square inside the circle, with the four vertices lying on the circle. You add little squares, and then some even smaller squares, etc. You get a zig-zag inside the circle whose length and internal area are lower bounds for the circle. It's quite an old idea. — Fly by Night (talk) 17:55, 16 November 2010 (UTC)
- You can do the same zigzag on the inside of the circle, again with length 4. Thus this doesn't give a lower bound. If you want a lower bound you need to take a piecewise linear curve all of whose corners lie on the circle, as described in arc length#Definition. Algebraist 18:26, 16 November 2010 (UTC)
- I'm sure you know exactly what I was trying to get at. You inscribe a 4-gon, 5-gon, 6-gon, … to give a lower bound and you circumscribe a 4-gon, 5-gon, 6-gon, … to give an upper bound. — Fly by Night (talk) 18:39, 16 November 2010 (UTC)
- Algebraist, I don't think a similar rectilinear zigzag on the inside of the circle would have length 4. In the simplest case, a square inscribed in a circle of diameter 1, we have a perimeter of 2√2. In general, such a zigzag would have length 2(x_max − x_min) + 2(y_max − y_min), where x_min, x_max, y_min, y_max are the minimum and maximum x- and y-values attained by the curve, which must lie strictly within the interval (−1/2, 1/2). So, as the inside zigzag approaches the circle more and more closely, its length will increase to a limit of 4. (So of course you're right that it doesn't provide a lower bound.) —Bkell (talk) 19:48, 16 November 2010 (UTC)
- Oh, true. I was thinking of a different and rather silly zigzag. Algebraist 19:50, 16 November 2010 (UTC)
Alright, I think I'm starting to understand it, but I don't yet completely... is this anything like the coastline of England? 76.68.247.201 (talk) 18:14, 16 November 2010 (UTC)
- Similar; except without the rain and the wind. Another article that you might like is Koch snowflake. It's a curve of infinite length that fits into a finite area. Check it out. There are some really nice pictures. — Fly by Night (talk) 18:23, 16 November 2010 (UTC)
- How is it similar? They're both about approximating arc lengths using piecewise linear curves, but I can't see any more connection than that. Algebraist 18:26, 16 November 2010 (UTC)
- I didn't say they (the snow flake and the zig-zag) were similar, nor connected. Simply that the OP might like the article. There's no need to be so cantankerous. Posts should add to the discussion, and not just take away from what others write. This post was totally unrelated to anything I wrote; you just wanted to have a go. Please stop. — Fly by Night (talk) 18:39, 16 November 2010 (UTC)
- I know you didn't. You suggested the case at hand is similar to the coastline of England, by which the OP presumably meant the coastline paradox as described in the famous article How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension. I'd like to know what similarity you see there. Please do not invent motivations for my actions. Algebraist 18:44, 16 November 2010 (UTC)
- I don't invent; I have a body of knowledge to demonstrate. — Fly by Night (talk) 19:02, 16 November 2010 (UTC)
- Wow, I've seen my fair share of immaturity on the internet, but I guess I was naive to think that a question on a Wikipedia reference desk would be immune to it. I'm not a Wikipedian myself, so perhaps I'm not in a position to comment, but I think you two should resolve your bickering elsewhere. It's juvenile.
- The reason I brought up the coastline paradox is that I saw an analogy between the pi = 4 "proof" and fractals (although, my knowledge of fractals is very poor, so I expect I'm in the dark about this). Working off what Emil J said, as the length ε of each step decreases, so does the distance between each step and the circle (ε/√2). As ε --> 0, this distance tends to zero as well. But because the total number of steps increases as well, the two end up canceling each other out, so that there still remains a net difference, in a sense, between the perimeter of the circle and the perimeter of the staircase.
- My understanding of the coastline paradox is something as follows: If there existed a true length for the coastline of Britain (with a corresponding true coastline of Britain), adding a single groove to it, like a rock, would have a negligible effect on the total length of the coastline. But if small rocks were placed all over the coastline, there would be a finite addition to the length of the coastline, and hence it is really impossible to measure the coastline of Britain, because every speck of sand matters.
- There appears to be an analogy between the two, although I might be way over my head. 76.68.247.201 (talk) 19:39, 16 November 2010 (UTC)
- The perimeter at any point in the operation is constant at 4; I don't understand what you mean is canceling what. Emil's construction just shows more clearly than the original that the circle is approximated arbitrarily well by this type of operation, but that its perimeter is not. 67.158.43.41 (talk) 21:34, 16 November 2010 (UTC)
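To see numerically what is being described (a small illustrative sketch in Python; the function name and step counts are arbitrary choices, not from the thread): sample points on a circle of diameter 1, measure the axis-parallel staircase through them and the inscribed polygon through them, and watch the first stay at 4 while the second tends to π.

import math

def staircase_and_polygon_lengths(n):
    """Sample n+1 points on a circle of diameter 1 (n divisible by 4) and measure:
    - the axis-parallel staircase through those points (sum of |dx| + |dy|),
    - the inscribed polygon through those points (sum of Euclidean distances)."""
    r = 0.5
    pts = [(r * math.cos(2 * math.pi * k / n), r * math.sin(2 * math.pi * k / n))
           for k in range(n + 1)]
    staircase = sum(abs(x2 - x1) + abs(y2 - y1)
                    for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
    polygon = sum(math.hypot(x2 - x1, y2 - y1)
                  for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
    return staircase, polygon

for n in (4, 16, 256, 4096):
    s, p = staircase_and_polygon_lengths(n)
    print(f"n = {n:5d}: staircase length = {s:.6f}, polygon length = {p:.6f}")
# The staircase length stays at 4 (up to floating-point rounding),
# while the polygon length approaches pi ≈ 3.14159: the staircase length
# simply never converges to the circumference.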
(Collapsed box: silly off-topic humour)
By the way, something nasty related to this happens in the Calculus of variations, called the Lavrentiev phenomenon, where one might find that a route that zigzags back and forth gives a better result than a smooth route along the same path. Dmcq (talk) 10:39, 17 November 2010 (UTC)
- This paradox (mostly the straight-line case) is discussed in Riddles in Mathematics by Eugene P. Northrop, where he incorrectly asserts that the limiting shape is not the same as the line being estimated. (This is by no means the only error in the book.) AndrewWTaylor (talk) 14:44, 17 November 2010 (UTC)
By the way, it is possible to turn the paradox into a working definition. Namely, consider the definition of the length of a curve by rectification as in Arc length#Definition, but use the L1 metric instead of the Euclidean metric. Then the length of the line segment from (x,y) to (u,v) is |x − u| + |y − v|, the length of the circle of diameter 1 from the paradox is 4, and in general, the length of any curve can be found by approximating it with zig-zags (using horizontal and vertical lines only).—Emil J. 15:18, 17 November 2010 (UTC)
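As a worked check of this definition, assuming the circle meant here is the diameter-1 circle from the paradox, parametrised by (½cos t, ½sin t) for 0 ≤ t ≤ 2π:
\[
  \ell_{1} \;=\; \int_{0}^{2\pi} \bigl(\lvert x'(t)\rvert + \lvert y'(t)\rvert\bigr)\,dt
  \;=\; \tfrac{1}{2}\int_{0}^{2\pi} \bigl(\lvert \sin t\rvert + \lvert \cos t\rvert\bigr)\,dt
  \;=\; \tfrac{1}{2}\,(4 + 4) \;=\; 4 ,
\]
which matches the staircase value.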
- Interesting, are there any areas of math where this is useful? 76.68.247.201 (talk) 01:02, 18 November 2010 (UTC)
Question about a Clock
Hi Wikipedia,
I have a question that I couldn't solve:
'The big hand and the small hand on a clock meet at noon and then again a little over an hour later. What’s the exact time difference between these two events?'
Analytically this wouldn't be a hard problem at all, but when I was asked to use differential calculus to solve this I simply had no idea.
Does anyone have any idea how to do this?
Thanks! —Preceding unsigned comment added by 169.232.246.199 (talk) 18:36, 16 November 2010 (UTC)
- The problem doesn't have anything to do with differential calculus as far as I can see. The equations are already linear so using a linear approximation doesn't simplify them.--RDBury (talk) 18:57, 16 November 2010 (UTC)
- You don't need any sort of calculus. An easy observation is that the big and small hands meet at regular intervals. Just count how many times they meet between noon and midnight (excluding one of the endpoints), and divide 12 hours by that.—Emil J. 18:59, 16 November 2010 (UTC)
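For concreteness, a worked instance of that counting (a sketch, assuming a standard 12-hour dial): the hands coincide 11 times between noon inclusive and midnight exclusive, so the interval between successive coincidences is
\[
  \Delta t \;=\; \frac{12\ \text{hours}}{11} \;=\; \frac{720}{11}\ \text{minutes}
  \;\approx\; 65\ \text{minutes}\ 27\tfrac{3}{11}\ \text{seconds}.
\]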
- You should just use vectors. Imagine the hour hand and the minute hand as vectors of equal length that move around the unit circle. The hour hand makes one revolution per hour, and so assuming that we start at midnight the hour hand can be written as h(t) = (sin 2πt, cos 2πt),
- where t is measured in hours. The minute hand rotates sixty times per hour and so m(t) = (sin 120πt, cos 120πt).
- If they point in the same direction then h and m will be linearly dependent vectors. If they point in opposite directions they will be too, but we will sort that out. The vectors h and m are linearly dependent if and only if sin(2πt)cos(120πt) − cos(2πt)sin(120πt) = 0, that is, sin(118πt) = 0.
- We can solve this using the periodicity of the sine function: 118πt = nπ, so t = n/118 for an integer n.
- If n is even then the hour and minute hands point in the same direction; if n is odd then they point in opposite directions. It follows that we want t = 2k/118 = k/59 for an integer k.
- This gives you the time t, in hours, when the minute and hour hands point in the same direction. The times after midnight are given by t = k/59 hours, k = 0, 1, 2, …
- I hope this helps. — Fly by Night (talk) 20:09, 16 November 2010 (UTC)
- Fly, you might want to watch a clock carefully sometime. You write "The hour hand makes one revolution per hour" and "The minute hand rotates sixty times per hour", but actually an hour hand makes one revolution every 12 hours and the minute hand makes one revolution every hour. You seem to have "proved" that the hour hand and the minute hand point in the same direction 59 times every hour, which of course is nonsense. —Bkell (talk) 20:44, 16 November 2010 (UTC)
- …Of course, what you write is true if the problem is changed to consider the minute hand and the second hand. —Bkell (talk) 20:48, 16 November 2010 (UTC)
- (edit conflict) Any need for the comment "you might want to watch a clock carefully sometime"? How does that help anyone?! The method is perfect, some of the constants are wrong. It seems that I proved when the second hand and the minute hand point in the same direction. Why not change them yourself and give the OP a nice answer? Why is the maths reference desk full of people that tell people their answers are wrong without offering their own correct solutions? It's just a drain on people's energy. I'm done with this page. Have fun everyone. — Fly by Night (talk) 20:50, 16 November 2010 (UTC)
- Wow, I'm sorry I offended you. I meant it as a friendly ribbing. —Bkell (talk) 21:10, 16 November 2010 (UTC)
- And the reason I didn't "offer my own correct solution" is because EmilJ had basically given a correct solution above, which I didn't feel any need to reiterate or improve upon. —Bkell (talk) 21:14, 16 November 2010 (UTC)
- Because the information that a certain answer is wrong is useful to all parties involved. If we didn't correct wrong statements, we'd turn into [enter name of an internet forum full of crap here]. Same goes if we implemented some stupid code of honor which says you are only allowed to comment if you also give a full solution to the OP's problem. -- Meni Rosenfeld (talk) 10:15, 17 November 2010 (UTC)
- A contrived solution with differential calculus is to consider the hands as vectors parametrized by time. The local minima of the distance between the vectors are precisely the points at which they overlap. 67.158.43.41 (talk) 21:39, 16 November 2010 (UTC)
- How would I model it using vectors parametrized by time then? —Preceding unsigned comment added by 169.232.246.107 (talk) 04:07, 17 November 2010 (UTC)
- The vector for each hand is just where the end of the hand is at that time, so for the hour hand you could use (sin(2πt/12h), cos(2πt/12h)) and for the minute hand (sin(2πt/1h), cos(2πt/1h)). I would suggest maximizing the dot product of the two vectors, rather than minimizing the distance since things will work out much nicer that way. Still it's much easier to do what Emil J. suggested than to consider calculus at all. (In fact even this method can get you the answer without calculus if you recognize the right trig identity.) Rckrone (talk) 04:30, 17 November 2010 (UTC)
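A sketch of the calculus route described here, using the same parametrisation (t in hours, both hands taken as unit vectors; the algebra below is illustrative, not the poster's own working):
\[
  h(t)\cdot m(t) \;=\; \sin\frac{2\pi t}{12}\sin 2\pi t + \cos\frac{2\pi t}{12}\cos 2\pi t
  \;=\; \cos\!\Bigl(2\pi t - \frac{2\pi t}{12}\Bigr) \;=\; \cos\frac{11\pi t}{6},
\]
\[
  \frac{d}{dt}\bigl(h\cdot m\bigr) \;=\; -\frac{11\pi}{6}\sin\frac{11\pi t}{6} \;=\; 0
  \;\Longrightarrow\; t = \frac{6k}{11}\ \text{hours},
\]
and the maxima (hands pointing the same way) are the even values of k, i.e. t = 12k/11 hours. So the hands coincide every 12/11 hours, consistent with the 720/11 minutes found below.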
- This method is quite similar to Fly by Night's above (with errors corrected), but motivated by calculus instead of linear algebra. I agree Emil's solution is the most elegant on the page. However, the OP wanted a method using differential calculus, saying "Analytically this wouldn't be a hard problem at all", to me indicating they were capable of creating their own solution, just not with calculus. 67.158.43.41 (talk) 21:21, 17 November 2010 (UTC)
If t is time in minutes after noon, this problem can be reduced to solving t = t/12 + 60. This equation is derived by noticing that the hour hand moves one-twelfth as fast as the minute hand and by recalling the condition that the minute hand is one revolution ahead of the hour hand. Solving gives t = 720/11 minutes after noon on the clock face. —Anonymous DissidentTalk 12:13, 17 November 2010 (UTC)
- The hour hand and minute hand first coincide at (5 and 5/11) minutes after 1 o'clock, at which time the second hand is at (27 and 3/11) seconds. Am I right in my belief that at no time in the 12 hours do all three hands coincide, except at 12 o'clock?→86.132.164.178 (talk) 10:54, 18 November 2010 (UTC)
Sums of squares.
(Sorry for the vague section title: can't think of anything appropriate.) If I have N things, and I can group them any way I like, eg. (XXX)(XX)(XXXXX), N=10. Once I've done this, S is the sum of the squares of the items in each group, in this case 9+4+25=38. [You'll notice the number of groups, and their order, is not important.] I supposed that for a given N and S, such a make-up might be unique. However, through mostly guesswork I found the case S=14, N=8:
- Three lots of (XX), and two lots of (X)
- One lot of (XXX), and five lots of (X)
How common a property is it that a given pair (N, S) can be realised in two different ways? Since I imagine this property becomes more common as N increases, are there any examples of three ways, or more, for N≤100? As an additional point, is there any easy way of finding a solution for given S,N? Thanks, - Jarry1250 [Who? Discuss.] 21:25, 16 November 2010 (UTC)
- A related conjecture is Euler's sum of powers conjecture, which you may be interested in, and Euler net has computed a fair amount in relation to that conjecture. For the k=2 case with the additional constraint that the left and right bases sum to the same number, I don't know anything more than you, sorry. 67.158.43.41 (talk) 21:49, 16 November 2010 (UTC)
- (edit conflict) For a given value of N, the smallest possible S is N and the largest possible S is N². Moreover, N and S must have the same parity (that is, they are both even or both odd)—to see this, consider any partition of your N things; if N is even, then the number of lots that contain an odd number of things must be even, and so an even number of the squares in your sum are odd, which means S is even (a similar argument applies when N is odd). So the number of possible values of S for a given value of N is approximately (N² − N)/2. (This count is actually off by 1, because I committed a fencepost error in my counting.) Now, the number of possible ways to partition your N things into lots is given by the partition function p(N), which is approximately exp(π√(2N/3))/(4N√3). As N gets larger, this exponential function grows much more quickly than the quadratic function (N² − N)/2, so your intuition is correct that the property you refer to becomes more common as N increases. For N = 100, the number of possible values of S is 4,951, while the number of ways to partition 100 items into lots is p(100) = 190,569,292. So, by the pigeonhole principle, there is some value of S for which there are at least 38,492 different solutions! —Bkell (talk) 21:50, 16 November 2010 (UTC)
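A brute-force sketch (illustrative only; the function names are arbitrary) that makes the pigeonhole count concrete: it groups the partitions of N by their sum of squares S and reports every S hit more than once, so it can also be used to hunt for values of S realised three or more times.

from collections import defaultdict

def partitions(n, max_part=None):
    # Yield all partitions of n as non-increasing tuples of positive integers.
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def collisions(n):
    # Group the partitions of n by their sum of squares S; keep S hit more than once.
    classes = defaultdict(list)
    for p in partitions(n):
        classes[sum(x * x for x in p)].append(p)
    return {s: ps for s, ps in classes.items() if len(ps) > 1}

for s, ps in sorted(collisions(8).items()):
    print(s, ps)   # S = 14 lists the pair from the question: (2,2,2,1,1) and (3,1,1,1,1,1)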
- For any integer n > 1 there are two distinct partitions of n² + n with the same sum of squares: n(n+1)/2 lots of 2, and one lot of n+1 together with n² − 1 lots of 1.
- Then you can add m lots of 1 to each partition to create two partitions of n² + n + m with the same sum of squares. Your example is the case n=2, m=2. By expressing any integer k ≥ 6 in the form n² + n + m (which can be done in roughly √k different ways) this allows us to find pairs of partitions of k with the same sum of squares. Gandalf61 (talk) 17:23, 17 November 2010 (UTC)
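A quick check that the two partitions in this construction have the same sum of squares:
\[
  \underbrace{2^2 + \cdots + 2^2}_{n(n+1)/2\ \text{terms}} = 4\cdot\frac{n(n+1)}{2} = 2n(n+1),
  \qquad
  (n+1)^2 + \underbrace{1^2 + \cdots + 1^2}_{n^2-1\ \text{terms}} = n^2 + 2n + 1 + n^2 - 1 = 2n(n+1).
\]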
November 17
Function
Is it correct (or rigorous, I should say) to say that functions are not in fact rules (defined by elementary functions such as log, power, multiplication, addition, etc., for a random example f(x)=3x^2+ln(x)) but infinite collections of points, which on certain intervals can sometimes be generated by a rule? I'm supposed to be doing a presentation project and this would help clear a certain point up. 24.92.78.167 (talk) 01:04, 17 November 2010 (UTC)
- No, I would say that neither statement is rigorous. A function need not have a rule that we can easily write down; when there is such a rule, we would say it is an elementary function. (The idea of elementary functions is not actually that useful for most mathematicians; and when it is used, different authors may give different lists for the building blocks used to construct elementary functions. The article gives a reasonable definition, but not the only possible one.) Your second idea would more often be called piecewise elementary functions. (The idea is that if we take pieces of elementary functions and glue them together, we get a new function.)
- The rigorous definition of a function is fairly simple, but abstract (which means it won't seem simple to anyone not comfortable with abstract language). You seem to be thinking about functions from (some subset of) the set of real numbers to the set of real numbers, which is the kind of function most people think about. (But you can define functions from any set to any other set.) This Math.Stackexchange discussion gives the rigorous definition, and several nice ways to explain it. I really like the "function monkey" idea. 140.114.81.55 (talk) 02:18, 17 November 2010 (UTC)
- The reason I say neither statement is rigorous is that there are many functions that don't fit either definition. There are functions so messy that no finite description with the language we have can give the "rule" for the function. Suppose you find a black box lying on the street some day. You can give it any real number, and it gives you back a real number. It doesn't matter if anyone ever understands the pattern of how the black box works: as long as it is consistent with itself, it is a function. (I.e., it always gives you the same result any time you give it 1, or pi, or -24.32325.)
- One other comment: when you say "an infinite sequence of points," it's not quite clear to me what you mean. "Sequence" usually means a countable list, like {1,2,3,4,5,...}. If you are thinking of something like all real numbers from 0 to 1, I wouldn't call that a sequence because it's not a countable set.140.114.81.55 (talk) 02:32, 17 November 2010 (UTC)
oops, sorry I didn't mean all functions. I fixed it to say certain intervals.—Preceding unsigned comment added by 24.92.78.167 (talk) 02:41, 17 November 2010 (UTC)
- The rigorous definition of function that I learned is, IIRC, this: A function from set A to set B is defined as a subset of A × B satisfying that for each a ∈ A there's precisely one element of the subset whose first term is a. Not a rule, not a monkey, not a black box, just a subset of a product. (Then you need to know the definition of A × B. IIRC that can most readily be defined as the set of {a, {a, b}} where a ranges over A and b over B; the "first term" referred to above is then the a in {a, {a, b}}.)—msh210℠ 06:29, 17 November 2010 (UTC)
- I think that should be {{a}, {a, b}}, so that it contains exactly one singleton set and one doubleton (unless a = b). See Ordered pair. AndrewWTaylor (talk) 14:34, 17 November 2010 (UTC)
- Not necessarily. See Ordered pair#Variants.—Emil J. 14:41, 17 November 2010 (UTC)
- Right, AndrewWTaylor, sorry. The "first term" is then a anyway.—msh210℠ 16:46, 17 November 2010 (UTC)
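To make "just a subset of a product" concrete, a small sketch (illustrative only; the names are arbitrary) in which a finite function is stored literally as a set of ordered pairs and the "precisely one element" condition is checked directly:

# A finite "function" as a bare set of ordered pairs, in the spirit of the
# set-theoretic definition above (illustrative sketch only).
f = {(1, 4), (2, 4), (3, 9)}          # a subset of A x B with A = {1, 2, 3}

def is_function(pairs, domain):
    """True if every element of the domain is the first term of exactly one pair."""
    return all(sum(1 for (a, b) in pairs if a == x) == 1 for x in domain)

def apply(pairs, x):
    """Evaluate at x by finding the unique pair whose first term is x."""
    (value,) = [b for (a, b) in pairs if a == x]
    return value

print(is_function(f, {1, 2, 3}))      # True
print(apply(f, 2))                    # 4 -- no "rule" was ever written down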
ring problem
Hey guys. Just a quick question with a problem on rings. Here's the problem I need to solve/prove:
For any arbitrary element x of a given ring <R, +, •>, we have x•x=x. Prove that for any arbitrary element x, x+x=0
A simple enough problem, yet I'm stuck and can't go any further. The problem is that all I know about rings is from the little I could find on wikipedia. Sure, I've learnt about operators and identities and inverses at school (I'm currently in 11th grade), but that's about all I know in this field. Anyways, despite my laughable amount of knowledge on rings, I managed to reach this conclusion: if we denote e as the additive identity of this ring, then x+x=e for any arbitrary element x.
(Here's how I reached this conclusion: Since x•x=x, we can say that (x+x)•(x+x)=(x+x). But since the distributive law holds, (x+x)•(x+x)= x•x+x•x+x•x+x•x = x+x+x+x. Therefore, x+x=x+x+x+x. Hence, x+x=e.)
After that, I tried to prove that e=0, but to no avail. So my question is this: in this particular problem (or generally speaking), is 0 denoted as the additive identity, or the actual real number 0?
And if 0 isn't the additive identity, how can I go about finishing off this proof?Johnnyboi7 (talk) 14:33, 17 November 2010 (UTC)
- 0 denotes the additive identity in this context.—Emil J. 14:36, 17 November 2010 (UTC)
- Use your common sense, how can 0 be the real number 0? It's not necessarily in the set R. This is called abuse of notation, and as you progress in math you'll find many authors abuse notations to the limit. Money is tight (talk) 19:36, 17 November 2010 (UTC)
- Ah, see how unfamiliar I am with the concept of rings here? I actually thought that the set R denoted the set of real numbers R, since the problem didn't specify what R was. And so naturally, my initial guess was that 0 denotes the real number 0. My bad. Now I can see that the set R is called R in reference to the "Ring" it is related to. Thanks EmilJ and Money is tight.Johnnyboi7 (talk) 03:19, 18 November 2010 (UTC)
- In general (though certainly not always) the real numbers are ℝ while an arbitrary ring is just R. 67.158.43.41 (talk) 04:10, 18 November 2010 (UTC)
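For the archive, the computation described above in display form (a sketch, writing 0 for the additive identity as clarified):
\[
  x + x \;=\; (x+x)\cdot(x+x) \;=\; x\cdot x + x\cdot x + x\cdot x + x\cdot x \;=\; x + x + x + x ,
\]
so adding the additive inverse of x + x to both sides leaves 0 = x + x.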
Integer
Is 0 an integer? —Preceding unsigned comment added by 24.92.78.167 (talk) 22:32, 17 November 2010 (UTC)
- Yes. Algebraist 22:33, 17 November 2010 (UTC)
- Zero is always an integer and an even number. It is neither positive nor negative. The only debatable category is whether or not it is a natural number. Dbfirs 08:46, 18 November 2010 (UTC)
- Well, it depends on which zero you mean. Arguably the zero of the real numbers is a distinct object from the zero of the integers or natural numbers, even though in most cases it's expedient to elide the distinction. --Trovatore (talk) 09:58, 18 November 2010 (UTC)
- I've also seen the convention that zero is both positive and negative, rather than neither. I don't think that's very common, though. Algebraist 12:09, 18 November 2010 (UTC)
- I think that's the usual rule in French. I suppose there's nothing inherently wrong with it. I hope it doesn't diffuse into English, though; I wouldn't like to have to start doubly disambiguating, saying strictly positive on the one hand or positive or zero on the other. --Trovatore (talk) 19:45, 18 November 2010 (UTC)
- There are already common words for "positive or zero" and "negative or zero": nonnegative and nonpositive. So I don't think there need be much fear of such diffusion.—msh210℠ 20:53, 18 November 2010 (UTC)
- It can happen. Mathematicians taught in French publish in English. People translate from Bourbaki. --Trovatore (talk) 22:06, 18 November 2010 (UTC)
- Not the only debatable category, Dbfirs. Also debatable is whether it's a whole number.—msh210℠ 20:53, 18 November 2010 (UTC)
- Well, whole number is not really a mathematical term. In the United States (and for all I know maybe elsewhere) it's used in mathematics education, but not in mathematics itself. --Trovatore (talk) 22:05, 18 November 2010 (UTC)
Flavor
Is chocolate a flavor? —Preceding unsigned comment added by 128.62.81.128 (talk) 23:08, 17 November 2010 (UTC)
- This is the mathematics reference desk, and the question is not a mathematical one. Bo Jacoby (talk) 23:42, 17 November 2010 (UTC).
- No.—msh210℠ 01:59, 18 November 2010 (UTC)
November 18
A financial math problem
Dear Wikipedians:
I've worked out part (a) of the following question, but feel that part (b) is more difficult and I'm not sure how to proceed:
A manufacturer finds that when 8 units are produced, the average cost per unit is $64, and the marginal cost is $18. (a) Calculate the marginal average cost to produce 8 units (b) Find the cost function, assuming it is a quadratic function and the fixed cost is $400.
So my solution for (a) is: marginal average cost = (d/dq)[C(q)/q] = [C′(q) − C(q)/q]/q, so
when q = 8, this is (18 − 64)/8 = −5.75.
for part (b), I know that the final form of the function looks something like C(q) = a2·q² + a1·q + a0,
where a0 is equal to 400. But I don't know how to get the other coefficients from the given information. So I need your help.
Thanks,
70.29.24.19 (talk) 01:33, 18 November 2010 (UTC)
- You have C(8) = 8 × 64 = 512 since you have C(8)/8 = 64. You also have C′(8) = 18. 67.158.43.41 (talk) 04:22, 18 November 2010 (UTC)
- Thanks! Got it! 70.29.24.19 (talk) 04:38, 18 November 2010 (UTC)
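For completeness, carrying those two conditions through (a sketch; C denotes the quadratic cost function assumed in the question):
\[
  C(q) = a_2 q^2 + a_1 q + 400, \qquad
  C(8) = 64 a_2 + 8 a_1 + 400 = 512, \qquad
  C'(8) = 16 a_2 + a_1 = 18 ,
\]
which gives a2 = 1/2 and a1 = 10, i.e. C(q) = q²/2 + 10q + 400; then C(8)/8 = 64 and C′(8) = 18, as required.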
Simple extremal subgraph query
Hi, is there an explicit function in (n,s) for the maximal number of edges a graph on n vertices may have such that the degree of every vertex is less than s? WLOG assuming n ≥ s - I thought perhaps it was to do with the floor function, certainly you can obtain ⌊n/s⌋·s(s−1)/2 edges, by just splitting the vertices up into classes of size at most s. However, is this necessarily an upper bound? What is the form of such a function? Estrenostre (talk) 04:39, 18 November 2010 (UTC)
- The upper bound is achieved when every vertex has degree exactly s-1, in which case you've got n(s-1)/2 edges. Obviously if n and s-1 are both odd, that value is not possible since it's not an integer, so it needs to be ⌊n(s−1)/2⌋. I'm pretty sure this value is obtainable. Rckrone (talk) 11:01, 18 November 2010 (UTC)
- It is indeed. Let d = s − 1. If d is even, define a graph on {0,...,n − 1} by connecting each vertex a by an edge with (a + i) mod n, where i = −d/2,...,−1,1,...,d/2. If d is odd, use the same vertex set, connect a with (a + i) mod n for i = −(d + 1)/2,...,−2,2,...,(d + 1)/2, and moreover, connect each odd a with a − 1 (and therefore each even a < n − 1 with a + 1).—Emil J. 13:09, 18 November 2010 (UTC)
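A small sketch (illustrative only, plain Python, covering just the even-degree case of this construction) that builds the graph and confirms the degree and edge counts:

# Sketch (not from the thread): the circulant construction for even d = s - 1.
# Vertices 0..n-1; connect a to (a + i) mod n for i = 1..d/2, which also gives
# the edges to (a - i) mod n. Requires 0 < d < n.
def circulant_edges(n, d):
    assert d % 2 == 0 and 0 < d < n
    edges = set()
    for a in range(n):
        for i in range(1, d // 2 + 1):
            edges.add(frozenset((a, (a + i) % n)))
    return edges

n, s = 10, 5                          # example values: degree bound s, target degree d = s - 1
d = s - 1
edges = circulant_edges(n, d)
degree = {v: 0 for v in range(n)}
for e in edges:
    for v in e:
        degree[v] += 1
print(len(edges), n * d // 2)         # 20 20 -- matches floor(n(s-1)/2)
print(set(degree.values()))           # {4} -- every vertex has degree exactly s - 1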
November 19
Question about polynomial rings
I'm kind of stuck on a question where I have to consider a common divisor of the highest degree of two arbitrary elements f(x) and g(x) in the polynomial ring; let's call it h(x). I have to prove that the greatest common divisor, d(x) (which is defined as a monic polynomial), is an associate of h(x). So far I'm trying to argue that since h(x) is a common divisor of the highest degree, it must have the same degree as d(x). What property should I use to show that they have the same degree? —Preceding unsigned comment added by 142.244.143.26 (talk) 02:44, 19 November 2010 (UTC)
poles problem
I saw one of those puzzles online today, I'll try to describe it. There are two poles of equal height with pegs at the same position on each of them. A rope of length 10 m is hung on these pegs such that the lowest point on the rope is 5 m below the pegs in the y-direction. How far apart are the poles? Intuitively I thought it would be 5 m because 10-5=5. I'd like to know how to do this with the arc length integrals and stuff to check my answer. How would I? Is my answer right? Thanks. 24.92.78.167 (talk) 04:09, 19 November 2010 (UTC)
- You don't need any arc lengths. Just draw a picture. 69.111.192.233 (talk) 05:32, 19 November 2010 (UTC)
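One way to make "just draw a picture" precise (a sketch, assuming the lowest point of the rope lies between the two pegs): the rope passes through its lowest point, and each piece of rope is at least as long as the straight line joining its endpoints, so
\[
  10 \;=\; \text{rope length} \;\ge\; \sqrt{a^{2}+5^{2}} + \sqrt{b^{2}+5^{2}} \;\ge\; 5 + 5 \;=\; 10 ,
\]
where a and b are the horizontal distances from the lowest point to the two pegs. Equality forces a = b = 0, so the poles are 0 m apart: the rope hangs straight down 5 m and back up, and the intuitive answer of 5 m is not right.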