Wikipedia:Reference desk/Mathematics
Welcome to the mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
April 4
Calculating Probabilities while playing "Let's Make a Deal"
While playing Let's Make a Deal, you are shown 3 doors. Behind one door is a car, but there are goats behind the other two doors. You pick door #1. Then the host shows you that there is a goat behind door #3. He then asks you if you want to keep door #1, or change to door #2. Should you stay with your first pick, or pick the other door? Which has a higher probability of winning the car? — Preceding unsigned comment added by Psychoshag (talk • contribs) 03:29, 4 April 2012 (UTC)
- We have an article on the Monty Hall problem, which will likely tell you more than you want to know. --Trovatore (talk) 03:30, 4 April 2012 (UTC)
I want to thank you for this link. It indeed has a lot of information, which I found very useful. Thanks again! — Preceding unsigned comment added by Psychoshag (talk • contribs) 06:20, 5 April 2012 (UTC)
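For anyone reading this later, the 2/3-versus-1/3 answer described in that article is easy to check empirically. Below is a small Monte Carlo sketch in R; the trial count and the door numbering are arbitrary choices of mine, not part of the original question.

play <- function(do_switch, trials = 100000) {
  wins <- 0
  for (i in seq_len(trials)) {
    car  <- sample(3, 1)                         # door hiding the car
    pick <- sample(3, 1)                         # contestant's first pick
    goats <- setdiff(1:3, c(pick, car))          # doors the host is allowed to open
    opened <- goats[sample(length(goats), 1)]    # host opens one goat door
    if (do_switch) pick <- setdiff(1:3, c(pick, opened))
    if (pick == car) wins <- wins + 1
  }
  wins / trials
}
play(do_switch = FALSE)  # stays near 1/3
play(do_switch = TRUE)   # stays near 2/3

Sticking wins only when the first pick was already the car (probability 1/3); switching wins in every other case, which is where the 2/3 comes from.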
There are 4 topologies on the set {1,2} and 29 on the set {1,2,3}; does a formula exist for the set {1,2,3,4,5,6,...,n}?
There are 4 topologies on the set {1,2} and 29 on the set {1,2,3}; does a formula exist for the set {1,2,3,4,5,6,...,n}? — Preceding unsigned comment added by Cjsh716 (talk • contribs) 06:26, 4 April 2012 (UTC)
- The number of topologies on a set with n labelled elements is sequence A000798 at OEIS. But I can't see anything there that suggests there is a known formula. Gandalf61 (talk) 09:26, 4 April 2012 (UTC)
thank you! — Preceding unsigned comment added by Cjsh716 (talk • contribs) 04:27, 5 April 2012 (UTC)
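In case it helps a later reader, the two quoted values are easy to confirm by brute force: a topology on a finite set is just a family of subsets that contains the empty set and the whole set and is closed under union and intersection. A rough R sketch (the function name is mine, and the approach is only practical for n ≤ 3):

count_topologies <- function(n) {
  full <- 2^n - 1                                   # subsets of {1..n} coded as bitmasks 0..full
  count <- 0
  for (fam_code in 0:(2^(full + 1) - 1)) {          # every family of subsets
    members <- (0:full)[bitwAnd(fam_code, bitwShiftL(1L, 0:full)) != 0]
    if (!(0 %in% members) || !(full %in% members)) next
    ok <- TRUE
    for (a in members) for (b in members)
      if (!(bitwOr(a, b) %in% members) || !(bitwAnd(a, b) %in% members)) ok <- FALSE
    if (ok) count <- count + 1
  }
  count
}
count_topologies(2)  # 4
count_topologies(3)  # 29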
Evaluation of Residue using Analytic Continuation
Starting from the Euler integral representation of the Gamma Function, I have derived the expression and have to use this to find the residue of the Euler integral at z=-m, m an integer. From the way the question is worded, I don't think this should be a difficult task but I haven't evaluated residues in this manner before and need some help in finding the correct approach. Thanks. meromorphic [talk to me] 09:52, 4 April 2012 (UTC)
- The integral is holomorphic at , so only the term in the summation contributes to the residue. Sławomir Biały (talk) 12:21, 4 April 2012 (UTC)
- (At the risk of asking a potentially obvious question...) So the residue is just ? meromorphic [talk to me] 14:24, 4 April 2012 (UTC)
- No, it's just (-1)^m/m! Sławomir Biały (talk) 14:39, 4 April 2012 (UTC)
- Ah, I'm with you now. Many thanks. meromorphic [talk to me] 14:44, 4 April 2012 (UTC)
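For anyone who wants a sanity check on that value: near z = -m the Gamma function behaves like ((-1)^m/m!)/(z + m), so a quick numerical test in R (m = 3 is an arbitrary choice) is

m <- 3
eps <- 1e-7
eps * gamma(-m + eps)    # about -0.1666667
(-1)^m / factorial(m)    # -1/6, the residue quoted above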
Bounding a tricky series
I'm trying to show the following series is O(log(log(x))) for all x > e.
So far, I have tried, separately: (1) Euler–Maclaurin summation, (2) expressing the summand using Vieta's formula [1], (3) writing the sine factor as an exponential and looking for telescoping terms in the series, and (4) trying to relate the sum to an entire function whose order necessarily contains a loglog term (see [2]). I get the feeling I'm closest with the first two, but I must be missing something.
After that, I'd like to show the existence of a strictly increasing sequence (converging to zero) such that , where c is a positive constant independent of n. I wasn't sure how to approach this, but it reminded me of the construction of an entire function with prescribed zeros.
But now I am stumped. Any pointers would be greatly appreciated. Korokke (talk) 10:53, 4 April 2012 (UTC)
- I'm not sure if it helps, but writing transforms the summation into
- and the problem is now to show that this is O(y) for . Sławomir Biały (talk) 12:27, 4 April 2012 (UTC)
- Hint: Decompose the sum to two parts, one bounded by a constant and one bounded by the harmonic series. -- Meni Rosenfeld (talk) 17:37, 4 April 2012 (UTC)
Evaluation of an Integral using Special Functions
I have to evaluate the integral using the substitution x=2t-1, which transforms the integral to and any standard properties of special functions that I like. Now, given the limits, I expect that the special function I want to use is the hypergeometric function with a=0, b=1/3 and c=1, but I cannot see the direct link. I'm tempted to say that I want to use the relation but I am confused, not least by the lack of the variable z in the integral I want to compute. Any ideas? Thanks. meromorphic [talk to me] 11:15, 4 April 2012 (UTC)
- I've no idea if what you are doing is the best approach, but surely you can just use that relation with z=1? 130.88.73.65 (talk) 13:27, 4 April 2012 (UTC)
- Thanks. I realised that using the relation , which in this case leads to , and then making the appropriate rearrangements and using the reflection formula for the gamma function leads to the answer. meromorphic [talk to me] 14:21, 4 April 2012 (UTC)
Billiard ball accident
Instead of a car being rear-ended, with its complicating factors like friction and crumple zones, suppose there is a stationary billiard ball A with mass ma in a frictionless vacuum, and suppose it receives a direct hit at time t=0 from billiard ball B with mass mb and pre-collision velocity v>0. Suppose both balls are perfectly hard, totally incompressible.
(1) Am I right that regardless of the relative masses and regardless of B's prior velocity, A will move forward and henceforth have a positive velocity?
(2) For any t>0, am I right that the post-collision velocity of A is constant? If so, then must there be a discontinuity in its velocity at time t=0 -- that is, a moment of infinite acceleration?
(3) If the mass of B is much greater than the mass of A, then it seems to me that the post-collision velocity of B remains positive; but if the mass of B is sufficiently small then the post-collision velocity of B must be negative. Is this right? If so, then what relationship between the relative masses and the original velocity of B characterizes the in-between situation in which B ends up with zero velocity? Duoduoduo (talk) 15:37, 4 April 2012 (UTC)
- Yes, all three observations are correct (in an ideal no-friction, no-compression situation). In the case of a head-on collision, i.e. if B initially moves exactly in A's direction, B will end up with zero velocity if and only if A and B have equal masses, regardless of the initial velocity of B. This follows from conservation of momentum (m1*v1 + m2*v2 = constant) and conservation of kinetic energy (0.5*m1*v1^2 + 0.5*m2*v2^2 = constant). In that equal-mass case, all the kinetic energy and momentum of B will be transferred to A, so A will move forward with whatever velocity B had before. -- Lindert (talk) 16:10, 4 April 2012 (UTC)
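Solving those two conservation equations for the post-collision velocities gives the usual closed form, which makes all three of the original observations easy to check numerically. A small R sketch (the masses and the initial speed 5 are arbitrary test values):

elastic_collision <- function(m_a, v_a, m_b, v_b) {
  # returns c(A's velocity, B's velocity) after a 1-D perfectly elastic collision
  c(((m_a - m_b) * v_a + 2 * m_b * v_b) / (m_a + m_b),
    ((m_b - m_a) * v_b + 2 * m_a * v_a) / (m_a + m_b))
}
elastic_collision(1, 0, 1, 5)   # equal masses:  5.0  0.0  (B stops, A takes everything)
elastic_collision(1, 0, 3, 5)   # heavier B:     7.5  2.5  (B keeps moving forward)
elastic_collision(3, 0, 1, 5)   # lighter B:     2.5 -2.5  (B bounces back)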
- For part 2, it's quite obvious that we can't have infinite acceleration, even in theory, as that would then require infinite force or zero mass, according to F = ma. So, this means that the billiard balls actually deform a bit as they collide, and accelerate rapidly over some short time period, as they rebound from this deformation. StuRat (talk) 16:24, 4 April 2012 (UTC)
- We can definitely have infinite acceleration in theory, specifically a Dirac delta. -- Meni Rosenfeld (talk) 17:32, 4 April 2012 (UTC)
- It is useful to consider the frame of reference of the center of mass in collisions. In this frame, each ball will have its velocity negated after the collision. If the masses are equal, B's speed is v/2, so the change in his velocity is of magnitude v, meaning he will come to rest in the original frame. -- Meni Rosenfeld (talk) 17:32, 4 April 2012 (UTC)
Creating a minimum spanning tree with arbitrary points allowed - name for this problem?
Years ago, I took a Discrete Mathematics course, in which the professor discussed what was he said was an unsolved problem. I don't know if he was right, or if the problem has been solved since then, and I want to find any information on the problem... but I can't remember the terminology to even do a search.
Basically, consider a 2d plane (or 3d, or nd, but for now 2d) on which are a set of points. The problem is to create a minimum spanning tree that includes those points-- but! it can include arbitrary additional points if it will help reduce the tree. For example, consider the case with three points-- if you only include those points and have links between only those, then the minimum spanning tree would be a triangle minus a leg. But it may be possible to decrease the tree size by having the three points you must include connect to a fourth arbitrary point in the center of the three instead.
This problem is useful when trying to create, for example, a transportation network that minimizes road length while still connecting everything. Or for circuit board design. Anyone know what this problem is called? Fieari (talk) 19:42, 4 April 2012 (UTC)
- See Steiner tree problem. -- Meni Rosenfeld (talk) 19:58, 4 April 2012 (UTC)
- That's the one! Thank you so much! Fieari (talk) 20:06, 4 April 2012 (UTC)
- You're welcome. -- Meni Rosenfeld (talk) 06:51, 5 April 2012 (UTC)
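To put a number on the three-point example in the question: for an equilateral triangle the best extra point is the Fermat point at the centre, where the three edges meet at 120°, and the saving over the minimum spanning tree on the corners alone is about 13%. A one-liner in R, assuming side length 1:

side <- 1
mst_length     <- 2 * side            # two sides of the triangle
steiner_length <- 3 * side / sqrt(3)  # three spokes to the Fermat point
c(mst_length, steiner_length)         # 2.000 versus about 1.732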
April 5
Series
Let's say you have a function, and you can find anti-derivatives of any order. (The example that I have in mind is .) Next, you sum all of these anti-derivatives to give, hopefully, a new function. In the case of you get
Is there a name for this kind of construction? Can anyone point me towards any interesting references? — Fly by Night (talk) 01:55, 5 April 2012 (UTC)
- If f is differentiable then σ satisfies the first order differential equation σ' - σ = f'. Rckrone (talk) 04:36, 5 April 2012 (UTC)
- Anti-derivatives are not unique and therefore neither will be your resulting . I guess you are implicitly assuming initial conditions such as for any n. I've never encountered this before.Widener (talk) 07:17, 5 April 2012 (UTC)
- Good point, and the solution to that differential equation is , which seems to give the desired answer for .--Itinerant1 (talk) 09:12, 5 April 2012 (UTC)
- I think you must mean, when . I just tested it with using the example Fly By Night gave. . This is what you get if you assume for a general sigma. Widener (talk) 10:32, 5 April 2012 (UTC)
- That's right, I'd be setting all of the constants of integration equal to zero. After all:
- When we find the anti-derivatives of a function, we get a function plus an arbitrary polynomial, e.g.
- If we work out all of the anti-derivatives and then sum, we get a class of functions:
- It's the leading term in [σ] that I'm interested in, i.e. the class member corresponding to the zero power series (0 ∈ R[[x]]). — Fly by Night (talk) 11:25, 6 April 2012 (UTC)
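A small numerical sanity check of the above, using f(x) = x as a hypothetical stand-in (the original example did not survive in this archive), with every constant of integration set to zero and with the sum taken to start with f itself, which is the reading under which Rckrone's equation σ′ − σ = f′ holds. The repeated anti-derivatives of x are x^(n+1)/(n+1)!, so σ(x) = x + x²/2! + x³/3! + ... = e^x − 1, and indeed σ′ − σ = 1 = f′. In R:

sigma <- function(x, terms = 30) x + sum(x^(2:(terms + 1)) / factorial(2:(terms + 1)))
x0 <- 1.3
c(sigma(x0), exp(x0) - 1)   # the truncated sum already agrees to machine precision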
Define a mapping on a suitable function space (say , although you can do this with spaces of measures too) by
You want to compute the resolvent operator (by the sum of the geometric series). A concrete formula for this is possible using the Fourier transform:
(This may be up to a constant like . I didn't keep careful track of delta functions when computing this.) Sławomir Biały (talk) 12:35, 6 April 2012 (UTC)
April 6
Notation question
I need to find a compact way of signifying that a variable is constructed by one or more derivatives with respect to some other variable. To qualify with an example, I would like to distinguish
from
.
I have come up with various conventions, only to discover that they were already used for other things. Is there a convention for this? Thanks!--Leon (talk) 12:40, 6 April 2012 (UTC)
- In the second example, z depends not just on x but on the derivative of x with respect to u. Is that the issue here? Sławomir Biały (talk) 12:43, 6 April 2012 (UTC)
- Well, in part. Perhaps I should provide more examples.
- .
- I want to notationally distinguish between all of these in a concise manner. Is there an accepted way of doing this?--Leon (talk) 12:54, 6 April 2012 (UTC)
- Take a look at the Total derivative article. What you're actually doing is composing functions and then applying the chain rule. You can see derivatives, and indeed differential expressions, as functions defined on a jet space. You're just composing a function from the jet space with another function. Why not just make these domains and these compositions explicit? — Fly by Night (talk) 12:59, 6 April 2012 (UTC)
- In the first case you'd write , in the second , in the third . I'm not sure what the fourth one means. — Fly by Night (talk) 13:09, 6 April 2012 (UTC)
- Yes, I am composing functions and applying the chain rule. But the reason I don't want to write this out explicitly each time is that the functions are not always as simple as these. I chose these examples for the sake of clear illustration alone. Here are some different examples, using a function of two variables , which I need not define.
- .
In response to Fly by Night's comment, I have amended my notation. Each is simply a reparametrization of the corresponding .
I would like to distinguish these concisely. Any ideas?--Leon (talk) 13:28, 6 April 2012 (UTC)
- I feel like my original question has not been adequately addressed. In each of these expressions, on the left hand side you have the x variable, and on the right you have derivatives of x. Do you intend that derivatives of x can be expressed as functions of x? More examples will not help. You need to say what you mean by these equations. Sławomir Biały (talk) 13:40, 6 April 2012 (UTC)
- Yes, I do intend that the derivatives of with respect to either or can be expressed as functions of either or . Thus the first two expressions are functions of and the latter two functions of . and are smooth functions, and their inverses always exist and are smooth also.--Leon (talk) 13:54, 6 April 2012 (UTC)
- This will only make sense if you have a particular function x in mind. Otherwise, as variables the x on the left and x on the right signify different things. On way to resolve such difficulties is to use differential notation. Sławomir Biały (talk) 14:25, 6 April 2012 (UTC)
- Another approach would be to stop using variables altogether, and write everything in terms of function composition, inverse functions, and so forth (which I think is like what FbN is suggesting you do.) Sławomir Biały (talk) 14:31, 6 April 2012 (UTC)
- I would write
- .
- or
- .
- Dmcq (talk) 14:50, 6 April 2012 (UTC)
You contrast
versus
- .
This comes up a lot in economics. In the former equation, x is viewed as the ultimately exogenous variable, while in the latter x is itself determined by a more fundamental variable u. It seems common to me to write the first equation as shown, since dy/dx is a function of x (hence z is a function of x), but to rewrite the left side of the second equation as g(u) since (a) it's a different function and so should not be given the same name z and (b) g is the chain rule product (dy/dx) as a function of x(u) times (dx/du) as a function of u -- so g is actually a function of u, not of x. Duoduoduo (talk) 18:03, 6 April 2012 (UTC)
- Ordinarily, I would entirely agree with Duoduoduo. They are not the same functions, and it is dangerous to suggest that they are. However, perhaps if I provide some additional context, my motives for giving them similar names should become clear.
- (Some of) the functions I am looking at evaluate the kinetic energy of parametric motions, with some parameter, which may or may not be the argument of the function, being regarded as time. The function mapping the argument of the function to the parameter regarded as time is always monotonically increasing and continuous. I want to distinguish different parametrizations of the (otherwise) same motion. By different parametrizations, I mean both (a) what variable the motion is parametrized in (the argument of the function) and (b) what parameter is being regarded as time.
- So, is there a convention for doing this? --Leon (talk) 11:30, 7 April 2012 (UTC)
- One way is to write a little subscript giving the parameter. You can see something like this for instance in Planck's law where the spectral radiance is expressed for various different parameters like or Dmcq (talk) 12:08, 7 April 2012 (UTC)
- There are two ways of looking at this. One is the language of functions and function values, and the other is the language of variables and equations. The notation f'(x) refers to functions: f and f', and to the function value f'(x). The Leibniz notation dy/dx refers to variables: x,y,dx,dy. Above you have mixed the two languages and so you are in trouble. The endless discussion about the interpretation of differentials is a waste of time. Take it formally and axiomatically. If x is a variable, then so is the differential dx. The rules are:
- d(x+y)=dx+dy
- d(xy)=dxy+xdy (where dxy always means (dx)y=dx·y)
- dx=0 iff x is constant
- dx is constant iff x is an independent variable (and so ddx=d²x=0).
- Now you can differentiate!
- d(x²)=d(xx)=dxx+xdx=2xdx
- Another example.
- x=at²/2+bt+c
- dx=dat²/2+atdt+dbt+bdt+dc.
- d²x=d²at²/2+2datdt+adt²+atd²t+d²bt+2dbdt+bd²t+d²c.
- If a,b,c are constants, and t is independent, you have
- d²x=adt²
- Your two examples are: zdx=dy and wdu=dy. Bo Jacoby (talk) 07:14, 7 April 2012 (UTC).
- Leon says: The function mapping the argument of the function to the parameter regarded as time is always monotonically increasing and continuous. I want to distinguish different parametrizations of the (otherwise) same motion. In your equations
- you could use one combined notation: where u is time. Then you could write dy/dx for the one, and dy/du, equivalently (d/y/dx)(dx/du), for the other. Duoduoduo (talk) 16:11, 7 April 2012 (UTC)
April 7
what is open subgroup in Group theory?
what is open subgroup in Group theory? — Preceding unsigned comment added by Cjsh716 (talk • contribs) 07:03, 7 April 2012 (UTC)
- A subgroup of a topological group that is an open set. Sławomir Biały (talk) 13:48, 7 April 2012 (UTC)
Expert opinion on methods required for proof of Goldbach's Conjecture
I don't know if anybody here would necessarily be 'tuned in' enough to give a good answer, but I am wondering if those mathematicians knowledgeable about related work believe that proof of Goldbach's conjecture (probably the paramount example of an unproven but 'known' to be true mathematical fact) will require essentially new methods or could theoretically just be an extraordinarily difficult problem in applying methods already in use. If the former, I would also be interested in knowing if opinions are out there as to how much of a departure from what is in use--degree of difficulty of imagination, let's say--may be required, just marginal tweaking or phenomenal ingenuity. If the latter, I wonder if simple but very strong versatility in computer programming--along with full knowledge of prior mathematical work--might be expected to crack the problem (and might be needed to do so because of number and difficulty of requisite computations).173.15.152.77 (talk) 11:07, 7 April 2012 (UTC)
- I am guessing, but I would think the answer is somewhere between "essentially new methods" and "an extraordinarily difficult problem in applying methods already in use". If current methods could be used, it would have already been done; but my guess is that some insightful advance on existing methods might do the trick. My (uninformed) guess would be that this is how Wiles proved Fermat's Last Theorem -- insightful advances on existing methods, bordering on new methods. Anyone have a different perspective on that?
- Incidentally, I would disagree with the characterization of the Goldbach Conjecture as " 'known' to be true" in the sense that everyone thinks the probability of its truth is extremely close to 100%. You never know about these things -- sometimes the first counterexamples to number theory assertions of the non-existence of anything with a certain property turn out to be very very large counterexamples. For example, from Pythagorean triple#Elementary properties of primitive Pythagorean triples:
- There exist infinitely many Pythagorean triples with square numbers for both the hypotenuse c and the sum of the legs a+b. The smallest such triple[10] has a = 4,565,486,027,761; b = 1,061,652,293,520; and c = 4,687,298,610,289. Here a+b = 2,372,159² and c = 2,165,017². This is generated by Euclid's formula with parameter values m = 2,150,905 and n = 246,792.
- Similarly, the smallest known counterexample to the previously conjectured impossibility of w⁴ + x⁴ + y⁴ = z⁴ is (95800, 217519, 414560; 422481). So sure, we've checked a very large number of numbers looking for counterexamples to Goldbach's Conjecture; but that still represents essentially 0% of all possible numbers -- the smallest 0%! Duoduoduo (talk) 16:36, 7 April 2012 (UTC)
- Also you ask if simple but very strong versatility in computer programming--along with full knowledge of prior mathematical work--might be expected to crack the problem (and might be needed to do so because of number and difficulty of requisite computations). If the conjecture is false, it strikes me that it would be at least as likely to be proven with a computer-found counterexample as with a non-computer-based proof by contradiction. On the other hand, if the conjecture is true, it's hard to see how this could be proven with a computer program. But maybe a proof could go something like this: Someone shows that every even number is in exactly one of N categories, defined in such a way that which category a number is in could be established via computer in polynomial time, and they show that either all or none of the numbers in any category are Goldbach counterexamples. Then we could use clever programming to find one example of a number in each category, and check to see if it's a counterexample. This strikes me as similar to how the four color theorem was proven -- it was proven that there are only a certain finite number of types to check, and then a computer was used to check them all. Duoduoduo (talk) 20:46, 7 April 2012 (UTC)
- There is a link in the article about verifying the Goldbach conjecture up to 4*10^18 with a brute-force computer search. It keeps getting more and more convincing as you go (every single even integer up to 4*10^18 can be represented as a prime pair with one of the primes <10,000). Basically, if we have good heuristics that says that the probability for any given number N to be a counterexample is X(N), and the sum of X(N) from a certain N_0 to infinity is much less than 1, that's a strong indication that the hypothesis is true.--Itinerant1 (talk) 21:01, 7 April 2012 (UTC)
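For anyone curious what such a brute-force check looks like in miniature, here is a toy R sketch (the function names are mine and the bound is kept tiny; the real verifications use far more sophisticated sieving):

is_prime <- function(n) {
  if (n < 2) return(FALSE)
  if (n %% 2 == 0) return(n == 2)
  if (n < 9) return(TRUE)
  all(n %% seq(3, floor(sqrt(n)), by = 2) != 0)
}
goldbach_witness <- function(n) {
  for (p in 2:(n %/% 2)) if (is_prime(p) && is_prime(n - p)) return(p)
  NA   # would signal a counterexample
}
sapply(seq(4, 100, by = 2), goldbach_witness)   # smallest prime of a pair for each even n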
- But then the question is "What are good heuristics?" Heuristics are really just extrapolations from what we already know. If the conjecture is true, the heuristics may well make sense. But if the conjecture is false, the heuristics are meaningless because they fail to take account of whatever makes the conjecture self-contradictory. Duoduoduo (talk) 21:22, 7 April 2012 (UTC)
- On the other hand, the Goldbach conjecture article contains this diagram. Itinerant1 (or anyone), do you know whether it is true that there are no known cases of a number being below the bottom edge of the bottom band? If so, that looks like there's likely to be a causal pattern. But if there were even one exception, that would deflate the impressiveness of the diagram. Duoduoduo (talk) 23:08, 7 April 2012 (UTC)
- I am actually the OP on this question (on a sub-standard browser). I do have to disagree with you, Duoduoduo. Of course it is true that proof is always required for anything to be established as irrefutably true in mathematics, but much much less believable things are routinely assumed true for heuristic purposes, contingently of course, but genuinely assumed true. And I don't mean in the way that the Riemann hypothesis is assumed true, and new results are contingently caveated to be dependent on a likely but not fairly certain fact. The reason that I asked the question was because the problem is HARD, but the thing is that I suspect myself that answering it specifically is not so much of a 'holy grail' problem with other results depending upon it or a massive enough financial motivation behind it that somebody would necessarily have built the right tools and mixture of different areas of expertise as a specialist, even if it would be close to routine--but massively time consuming--to complete a proof using pretty much just older methods. I would imagine that if it were a matter of primarily needing to have a computer do something extraordinarily lengthy, as with the Four color theorem, that it would be considerably more difficult also to program, graph theory being essentially an already in-between subject area to computer science. Now, the Goldbach conjecture itself, I really think is quite clearly known to be true, as much as anything unproven is ever in mathematics, and the real mathematical questions are "What on Earth will it take to prove it?!" and, then, "How can the asymptotic range for number of representations be narrowed further than >0?" You may disagree, but find me then at least one person very knowledgable about both the Goldbach conjecture and another unproven (non-axiomatic, of course) but pretty much assumed to be true mathematical statement if I am really to have any reason to withdraw my opinion. If all that you are saying is that proof is always required, well, yeah, I get that.Julzes (talk) 02:44, 8 April 2012 (UTC)
- Well, there's a good reason that a proof is always required: because without a proof, it might not be true. There only has to be one counterexample for Goldbach's conjecture to fail to be true (I'm assuming there isn't any theoretical reason why there has to be more than one counterexample; feel free to correct me if I'm wrong here), and while that may seem very unlikely according to the heuristics and the evidence from the first few numbers tried, it hasn't actually been proved impossible. --COVIZAPIBETEFOKY (talk) 03:51, 8 April 2012 (UTC)
- Sure, but there's plenty of evidence. The evidence that GC is true would be more than sufficient to establish, say, a physical theory, to the point that no one would openly doubt it except as an intellectual exercise.
- I think the reason that people demand a higher standard of proof in mathematics than in physics is mainly that it's just not available in physics. But that's not really a very good reason. The natural conclusion really is that GC is true; we just don't know how to prove it. --Trovatore (talk) 05:03, 8 April 2012 (UTC)
- Yes, but what is the evidence for GC? Perhaps more the partial results, heuristic arguments, the simplicity of the statement, and the way it fits in with other conjectures, combining to create a gut feeling it is true, and perhaps less the numerical evidence. You might have to go up to 40 digit numbers to find a counterexample for Mertens conjecture, but that one exists is proven. IIRC, that one was thought to be too much to be true even before 1985, so gut feeling was proved right. But number theory is tricky.John Z (talk) 10:51, 8 April 2012 (UTC)
- Nothing to do with size of first counterexample per se. It's the probabilistic argument that I find extremely convincing. --Trovatore (talk) 17:42, 8 April 2012 (UTC)
- Julzes here again (same problem, common for me these days on weekends). While this discussion on the need for a proof specifically as applied to the Goldbach conjecture might have some value beyond that there is always such a need (If people feel so, then by all means continue, and perhaps I will chime in) in determining which of two opposing mathematical propositions is true, it does not approach the question I was trying to ask (and now realize I muddied by soft-lining the need for proof--or refutation). Let me briefly re-pose that question, and then if nobody here can respond I may report a reasonable answer myself sometime (a long time,most likely) after this has been archived. What I was essentially asking bears upon the question of groupthink among mathematicians in a way, a different way than that of which I perceive myself perhaps being accurately accused. As it would certainly garner some substantial attention and other benefits within the society of mathematicians to completely solve the problem, I was wondering if it still might not be the case that the problem is already on the verge of being proved (not something I at all have ideas of doing myself; I just think I might be able to gather what is the expert opinion I want without Herculean efforts). The question would be, then, if those who have looked at the proof of Chen's theorem and related material and said A) "The work and computations needed to go that last step along these paths would be absolutely mind-boggling and could never realistically be done by human beings for such an obviously worthless fact's proof just for its own sake by one or a few sane people seeing all the related and more interesting stuff there is right now to be done in related areas of mathematics" or B) "We really will have to wait for new 'stuff' for this one to be cracked if it ever will be." If it is A, though, perhaps C is now more accurate, that being C) "A good programmer knowing this sort of subject matter exquisitely well could have the programming done in half a year, a result in half a day, and a write-up, then, in another half month." Now, I suspect it may be hard to find anybody who can give a great reliable answer to this, but I asked because I am not sure. No offense to those holding the hard-line on proof and digressing from my question because of my own failing in wording things initially to avoid the debate, but answering me--not necessary, of course--requires answering the question differently than so far has been done. I am capable of taking the hard line on the need for proof myself, and disagreement here on which is right does not appear terribly expert (Expert, specialized, opinion being my question really--It is a problem that has certainly had a lot of people weigh in over the centuries, and some even recently; so, I had hopes somebody might be conversant in this).173.15.152.77 (talk) 13:51, 8 April 2012 (UTC)
- I for one was not responding to your question, to which I really have no informed comment to give, but to the intervening discussion. --Trovatore (talk) 19:32, 8 April 2012 (UTC)
Perimeter that is tangent to a circle
This is not a homework problem. Imagine there are 5 points in two-dimensional space. They are A, B, C, D, and O, and their coordinates are as follows:
- A=(0, -u)
- B=(-s, 0)
- C=(0, t)
- D=(DX, DY)
- O=(0,0)
and:
- u is the distance between O and A
- s is the distance between O and B
- t is the distance between O and C
- RA is equal to u+t and is the radius of an arc from D to C
- RB is the radius of a circle with B as its center
- D is a point that lies on the perimeter of the aforementioned circle
- DX=-s-RB*u/(s*sqrt(u²/s²+1))
- DY=RB*u/(s*sqrt(u²/s²+1))
- θ=arctan(s/u) and is the angle formed by D, A, and C
Given s, t, RB, how do I find DX, DY, u, RA, and θ? --Melab±1 ☎ 19:44, 7 April 2012 (UTC)
- You must specify u (or θ, or something else). In other words, your problem is underconstrained, and its answer depends on the value of u or equivalently where Point A is; or some other specified parameter.
- You have five unknowns. You specified three. You provided one constraint (that the distance from Point D to the center of the circle at (0, t-u) is fixed). You have one remaining free parameter. Nimur (talk) 20:07, 7 April 2012 (UTC)
- I want to determine the size of an arc that is tangent to the circle formed by B. The arc starts at C and is drawn counter-clockwise to where it will just touch the circle formed by B. The distance from B to O is fixed as well. --Melab±1 ☎ 20:38, 7 April 2012 (UTC)
Decidability and Goldbach's Conjecture
One time I saw this line of reasoning on an article's talk page, but I can't find it now. It went like this:
The Goldbach conjecture cannot be undecidable, because if it's false it's decidable by showing a counterexample, and so if it's undecidable then it's true and hence decidable.
It seems to me that this is wrong for the following reason -- if someone could sort this out for me it would help me understand undecidability better. It seems to me that undecidability means undecidability within the confines of a particular axiomatic system (the number system in this case?); so the Goldbach conjecture might be undecidable within that framework but might still be decidably true in a wider logical framework. Does this make sense? Is it conceptually possible for the Goldbach conjecture to be undecidable? Duoduoduo (talk)
- The thing you have to keep in mind is that undecidability is not a truth value. Undecidability is always relative to some particular formal theory. Truth, on the other hand, is just truth.
- If GC is undecidable in, say, Peano arithmetic, or even Robinson arithmetic (basically, any formal theory capable of proving all true quantifier-free sentences of arithmetic), then GC is really genuinely true. However, that doesn't mean that it's necessarily provably true in any particular named formal theory (other than, e.g., ones that take GC as an axiom). --Trovatore (talk) 23:33, 7 April 2012 (UTC)
- To clarify, that quote is wrong, although I've seen trained mathematicians fooled by the same reasoning in the past. I've also seen them try to argue that therefore if it's undecidable, it cannot be proven undecidable. This is also wrong.
- Another way of looking at it would be that there are two notions of truth: true in every model; and true in the standard model. That comment is confusing the two. Undecidable means (by Godel) true in some models and false in others. If it's false in the standard model, then it's decidable by showing a counterexample. But it might be true in the standard model and false in others.--121.74.125.218 (talk) 22:15, 8 April 2012 (UTC)
- I think the phrase "true in the standard model" is an excessively wordy way of saying "true". In a language that has a clear intended interpretation (such as the language of arithmetic), true is just true; it doesn't need to be relativized to anything. Provability and decidability, on the other hand, are always relative to some particular theory. Your truth in "other" models is also relative to some particular theory (otherwise, "model" of what, exactly?) --Trovatore (talk) 22:54, 8 April 2012 (UTC)
- Whereas I think "true" is a short way of saying "true in every model". And of course this is all relative to a theory.--121.74.125.218 (talk) 23:32, 8 April 2012 (UTC)
- No, "true" does not mean "true in every model". "True in every model" is the same as "provable", which is definitely not the same as "true". --Trovatore (talk) 00:58, 9 April 2012 (UTC)
- Clearly I think true and provable are the same (not definitionally; they're equivalent by Godel). I'm not prepared to commit to the existence of a standard model, whatever that means, so using "true" to mean "true in the standard model" is discomforting. Of course, I can still reason about a standard model in certain cases, for example analyzing the standard model of PA when my metatheory is ZFC.--121.74.125.218 (talk) 13:46, 9 April 2012 (UTC)
- Well, you're just using the word wrong. True does not mean provable. True means true. The sentence "2+2=4" is true just in case 2 plus 2 really is 4; it has nothing to do with whether you can prove it.
- If the natural numbers do not really exist, then sentences of arithmetic are simply meaningless rather than true or false. They can still be provable or refutable, but this does not suddenly license you to start calling that true or false. --Trovatore (talk) 17:30, 9 April 2012 (UTC)
- There can be reasonable cases where the longer 'True in the standard model' is clarifying, but it is ultimately a redundancy except where one is comparing truth in one model non-standard and one standard. If the question is well-posed, it is either true or false aside from such a case (which nevertheless may arise).Julzes (talk) 17:49, 9 April 2012 (UTC)Specific, literally labeled models ('standard' and 'non-standard'), that is.Julzes (talk) 17:53, 9 April 2012 (UTC)
- I don't disagree with that. Most of the time, though, I think it's a bad idea to add "in the standard model", especially in contexts like exposition of the Goedel incompleteness theorems. It is important to emphasize, for example, the fact that the Goedel sentence of a consistent theory is just simply true, and saying it this definitely is no more problematic than saying that the theory is consistent. --Trovatore (talk) 19:19, 9 April 2012 (UTC)
April 8
Counting paths between vertices
I was hoping you could help me with a quick probability/graph theory result, since you've been very helpful in the past. I have asked other random graph questions previously so sorry if you're getting sick of them! I think this one should be considerably simpler than my previous query, it's about counting the number of paths between 2 vertices.
Working in the Erdős–Rényi random graph model G(n,p), suppose . Suppose also , and finally suppose we have vertices such that ; i.e. there exists at least 1 path of length at most between the 2 vertices.
I need to show that with probability as , and are joined by fewer than paths of length . Apparently this can be done "using the first moment method".
To the best of my knowledge of the first moment method, this probably means "by pulling out obvious inequalities from the definition of expectation". Now I used a simple counting argument to show that if is the number of length-i paths between and , then . I know that with large probability for large , the number of vertices at distance from any given vertex is approximately .
So, I tried a counting argument to get and then divide through by this to get the probabilities (and expectation) given as per assumptions: ; then I rewrote and this is therefore and therefore . This is obviously not what we want: it doesn't become arbitrarily small as n tends to infinity, and obviously something's wrong. I'm not sure I used the right method but even if I did, I think part of the problem is that I'm using the assumption , whereas really our assumption is that v and w are at distance at most i, so we could have (e.g.) only paths shorter than i, in which case X could still equal 0. So, I guess my assumption is incorrect, but I'm not sure exactly how to fix it. Could anyone help me figure out how? As always, many thanks for your assistance. Delaypoems101 (talk) 03:41, 8 April 2012 (UTC)
- Any thoughts, anyone? I don't expect it's a very hard question for a more seasoned mathematician than myself! :-) Delaypoems101 (talk) 04:05, 10 April 2012 (UTC)
Polynomials modulo p
Given p, a prime, I am concerned with finding the roots of a polynomial f(x) mod p. Can anyone suggest where I might find information regarding algorithms I might use to investigate this? Thanks. meromorphic [talk to me] 10:45, 8 April 2012 (UTC)
- See our articles on factorization of polynomials over a finite field and irreducibility tests, Berlekamp's algorithm, and Cantor–Zassenhaus algorithm. Gandalf61 (talk) 09:18, 9 April 2012 (UTC)
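If p is small, it can also be worth remembering that simply trying every residue works; the algorithms linked above are what you need when p is large. A naive R sketch (coefficients are given constant term first, a convention I've picked arbitrarily):

roots_mod_p <- function(coeffs, p) {
  f <- function(x) sum(coeffs * x^(seq_along(coeffs) - 1)) %% p
  Filter(function(x) f(x) == 0, 0:(p - 1))
}
roots_mod_p(c(-1, 0, 1), 7)   # roots of x^2 - 1 mod 7: 1 and 6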
"Free" Nonassociative ring?
I'm certain I'm reinventing the wheel here (albeit with perhaps a somewhat distasteful choice of terminology and notation, since aesthetics has never been my strong suit), so my prompt is simply: please point me to the right literature to find out more about this.
Let be a nonassociative ring, in which we recall that multiplication is not required to be invertible, have an identity, be commutative, or even (hence the name) be associative. For the most basic instance of this construction, we could take . Let be a set of symbols, chosen to be disjoint from . We may define the set of all "monomials" recursively as follows:
- If , then the 1-tuple is a monomial. If , then .
- If and at least one of the two is not equal to for some , then the 2-tuple .
This definition is carefully constructed to avoid any accidental collision of elements, as long as a 1-tuple cannot be interpreted as a 2-tuple and vice-versa. A monomial is built to encode the tree of a fully parenthesized multiplicative expression in elements of and , with the caveat that adjacent multiplied elements of be collapsed into a single element of .
Define the ring as the set of all functions such that for all but finitely many . We think of this intuitively as . Addition is pointwise, and multiplication is fun to define (actually, it almost ends up being easier to define than it usually is, as a consequence of the fact that it's usually possible to extract the original factors from a product of two monomials, but that's a bit of a tangent). It trivially satisfies the requirement that it form a group under addition, since as an additive set, it is nothing more than a direct sum of with many copies of , and distributivity takes a bit more work, but it again follows as essentially a direct consequence of distributivity of multiplication over addition in .
Then satisfies a universal property: for any nonassociative ring with a map , and a function , there is a unique extension , with and for .
For nonassociative rings, the definition of a two-sided ideal needed to define a quotient ring is identical to that for a noncommutative ring (although the ideal generated by an element or subset of a nonassociative ring is a much worse monster). --COVIZAPIBETEFOKY (talk) 17:01, 8 April 2012 (UTC)
On Hypergeometric Functions
Given the integral formula for the hypergeometric function , I have to express the integral in terms of a combination of hypergeometric functions. I am keen to do as much of the legwork by myself as possible, so no answers or hints that would remove all challenge from the question please, but I have no clue how to express functions, however they may be represented, in terms of hypergeometric functions. Could someone please give me some help? Thanks. meromorphic [talk to me] 17:13, 8 April 2012 (UTC)
Hilbert's second problem
There is a reference to "Dawson (2006:sec. 2)" but no other identifying information. I would like to pursue reading his discussion; please help complete that reference; the citation itself & more on who Dawson is. Thank you. — Preceding unsigned comment added by 75.141.237.29 (talk) 18:20, 8 April 2012 (UTC)
- The Hilbert's second problem article has Harvard-style citations, which means that you need to look down in the References section to see which Dawson is being referred to and what he wrote in 2006. One advantage of the Harvard style is that footnotes can then be used for notes rather than for citations. Hopefully this is enough information for you to find the Dawson ref; if not, please feel free to ask more. --Trovatore (talk) 19:18, 8 April 2012 (UTC)
Yes, thank you. This is my first time using this forum - apologies for any format/procedure errors. But mainly, thank you for re-directing me to the references section again - it was inverted from the sequence I was used to; that's what caught me up (looking at the bottom of the list rather than the top). CeptualInstitute. — Preceding unsigned comment added by 75.141.237.29 (talk) 20:03, 8 April 2012 (UTC)
Showing an integral is well defined
Given where a>0, , 0<b<1, I am supposed to show that this integral is well defined. I'm not entirely sure what this process constitutes though. Is it essentially just showing that this integral exists and is finite? If so, how do you do this without directly evaluating it? Thanks. meromorphic [talk to me] 19:51, 8 April 2012 (UTC)
- An improper integral is defined as a limit, so proving that the integral is well-defined means proving that the limit exists. You don't necessarily need to evaluate it to do that, although you'll probably need to get most of the way to evaluating it (if you know a particular term is finite you don't need to actually plug the numbers in and work it out, you can just move on to the next term). --Tango (talk) 21:46, 8 April 2012 (UTC)
- Can't you use Cauchy's residue theorem and a suitable choice of contour? I get the integrand having poles at z = ±ia, while the residues are ∓eiab respectively. Alternatively, try multiplying the numerator and denominator of the first quotient by the denominator of the second and vice versa. I got a cosine and a hyperbolic sine when I tried. — Fly by Night (talk) 02:29, 9 April 2012 (UTC)
April 9
297/140 seems like an unusual answer.
I don't think I've made a mistake - I've checked the math several times now - I guess I just want to make sure here because 297/140 doesn't seem like a nice answer.
Calculate the line integral where and C is the path from (0,0,0) to (1,1,1) along the parametric curve .
This is what I did:
Since we make the substitutions ,, and the limits of the integral are 0 and 1.
Widener (talk) 05:27, 9 April 2012 (UTC)
- I can't fault your logic at any step; it seems straightforward enough. The final definite integral is also correct. Due to the quirkiness of the form of the "field" A and of the path C, the apparent complexity is not surprising, and I would not have expected a "nicer" answer; that the result should be rational is about all you could expect. — Quondum☏ 06:30, 9 April 2012 (UTC)
Second Order ODEs
Consider a second order homogeneous ODE of the form ay″ + by′ + cy = 0, where a, b and c are constants with a ≠ 0. The general solution will be of the form αƒ + βg, where α and β are constants and ƒ and g are linearly independent functions, i.e. one is not a constant multiple of the other. Everything's straightforward when b² – 4ac ≠ 0. When b² – 4ac = 0 the particular solutions are of the form e^(rx) and xe^(rx), where r is the repeated root of ax² + bx + c = 0. The thing I don't see is where does xe^(rx) come from? It obviously works. Every book I've seen on the subject simply checks that it works. Obviously e^(rx) and e^(rx) (the same solution twice) aren't linearly independent functions, and so another function would be required. But where does xe^(rx) come from, besides trial and error? Can anyone supply a decent online reference? — Fly by Night (talk) 17:31, 9 April 2012 (UTC)
- Sorry as this is OR, so make of it what you will (also I have no references, I simply give this in the hope that you might find it useful): take the limit as two distinct roots converge
- where and are adjusted as a function of to avoid the two degrees of freedom becoming either zero or infinity. I'm sure you'll see how the required solution appears when you write out the Taylor series for . — Quondum☏ 19:55, 9 April 2012 (UTC)
- Don't worry about OR, this is the reference desk. I can't seem to follow your argument to a satisfactory conclusion. The exponential function is an entire function. I took the limit and got:
- I computed the Taylor series, and didn't see anything; I also tried dividing through by ε but this gives an indeterminate expression as ε → 0. Could you be more specific, please? — Fly by Night (talk) 20:18, 9 April 2012 (UTC)
- Don't worry about OR, this is the reference desk. I can't seem to follow your argument to a satisfactory conclussion. The exponential function is an entire function. I took the limit and got:
- The way I was looking at it, I put
- If we now put , and , and then keep and constant (by varying inversely with and similarly finding a suitable way to vary ), the terms with tend to zero, the prior terms remain unchanged. The exponential function being entire is useful in that this ensures that the result will be independent of the path in the complex or real domain along which tends to zero. Not very rigorous perhaps, but better than trial-and-error. — Quondum☏ 20:47, 9 April 2012 (UTC)
- The way I was looking at it, I put
If , the differential operator factors as
Because of the commutator identity , we can rewrite this as
which is now easy to solve. Sławomir Biały (talk) 23:59, 9 April 2012 (UTC)
- Sławomir, could you please explain where the commutator identity came from and what its relevance is? Also
- which is just another homogeneous second order ODE whose characteristic equation has a repeated root. The solution of which is
- My original question was where does the factor of x come from? — Fly by Night (talk) 01:46, 10 April 2012 (UTC)
- Why would you expand out? You can solve this equation! The solution is , and this shows where your x comes from! Sławomir Biały (talk) 11:06, 10 April 2012 (UTC)
The ODE having characteristic equation roots and has the solutions for . The limiting case gives . This is where the factor of x comes from. Bo Jacoby (talk) 08:54, 10 April 2012 (UTC).
- It's worth remarking that the solutions all have the same initial conditions and , so the limit is a natural one to take. Sławomir Biały (talk) 11:28, 10 April 2012 (UTC)
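A concrete check of the repeated-root case is also easy to do with R's symbolic derivative D(); the coefficients a = 1, b = 2, c = 1 (so r = -1) are an arbitrary example of mine, not anything from the question.

f  <- expression(x * exp(-x))       # the solution carrying the extra factor of x
f1 <- D(f, "x")
f2 <- D(f1, "x")
x  <- seq(-2, 2, by = 0.5)
eval(f2) + 2 * eval(f1) + eval(f)   # zero (up to rounding) at every sample point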
Generalised permutation block matrices?
Fix an integer k > 0. We can form 'block' permutation matrices, by taking a genuine permutation matrix, and replacing the zeroes with zero k x k matrices, and ones with k x k identity matrices. Further, we can form generalised block permutation matrices by, instead of having identity matrices, using a collection of k x k integral matrices. Do these things have a name? Are they well studied? The group formed by such matrices decomposes as the semi-direct product of r direct copies, say, of GL(k,Z) (viewed as block diagonal rk x rk matrices), with the symmetric group on r letters acting by permuting the blocks. I'd like to learn as much about this group as possible, so any pointers would be great! Thanks, Icthyos (talk) 19:13, 9 April 2012 (UTC)
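In case a concrete instance helps anyone answering: here is a small R sketch of the construction being described, with the permutation and the two 2 × 2 integer blocks chosen arbitrarily by me.

block_perm <- function(perm, blocks) {
  # block row i gets its only nonzero block, blocks[[i]], in block column perm[i]
  k <- nrow(blocks[[1]]); r <- length(perm)
  M <- matrix(0, r * k, r * k)
  for (i in seq_len(r)) {
    rows <- ((i - 1) * k + 1):(i * k)
    cols <- ((perm[i] - 1) * k + 1):(perm[i] * k)
    M[rows, cols] <- blocks[[i]]
  }
  M
}
block_perm(c(2, 1), list(matrix(c(1, 0, 1, 1), 2), matrix(c(2, 1, 1, 1), 2)))
# For the plain version with identity blocks, kronecker(P, diag(k)) with P an
# ordinary permutation matrix does the same job.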
Help locating a book
If you feel this question belongs elsewhere please feel free to move it. I put it here because someone else may have had their interest in Maths piqued by it.
When I was 7 in the mid-60s, I read a book which outlined about a dozen well-known mathematical problems, such as the Tower of Hanoi and the 7 Bridges of Konigsberg. I've been racking my brains to remember what book this was but to no avail. I also have a feeling it may have been by Edward Kasner, as I'm sure I read in the same book about the naming of googol. Can anyone help with this book please? --TammyMoet (talk) 19:32, 9 April 2012 (UTC)
- I typed the author's name into Amazon and got Kasner, Edward; Newman, James (2001). Mathematics and the Imagination. Dover Publications. ISBN 978-0486417035.. It was originally published in 1971. It seems like a very long book for a seven year old; it has 400 pages. But according to the contents page it mentions the Towers of Hanoi and the Googol. On page 270 it mentions seven bridges too. Hope this helps. — Fly by Night (talk) 20:32, 9 April 2012 (UTC)
- Thanks for this. I can quite accurately date the book to before 1971 because of the library I got it from, which was in a town I moved from in 1968. I wonder if there was an earlier version? It was quite a long book as I remember, but I don't know about 400 pages. Anyway you've given me something to go on there! --TammyMoet (talk) 09:38, 10 April 2012 (UTC)
- Bingo! This reference says the book was originally written in 1940! Thanks again! --TammyMoet (talk) 09:40, 10 April 2012 (UTC)
Powers of i
I can say that , but why can't I say that ? It Is Me Here t / c 22:51, 9 April 2012 (UTC)
- There are four possible fourth roots of 1; 1 is just one of them. You don't even have to go to imaginary numbers to get the problem, e.g. using the same logic. Dmcq (talk) 23:20, 9 April 2012 (UTC)
- In general, exponentiation is not single-valued. A derivation of this sort is guaranteed to give a correct value of the exponential. --COVIZAPIBETEFOKY (talk) 23:26, 9 April 2012 (UTC)
- In concrete terms: , , and , so Take a look at our article on roots of unity. As an example of the confusion: working over the real numbers, it's tempting to think that when, in reality, — Fly by Night (talk) 00:17, 10 April 2012 (UTC)
- If you're confused by the above answers, good! Complex exponentiation is confusing, even more so because different definitions give different ways to approach the same thing. But I'll try to sum up:
- There's no problem when the exponent is an integer. is definitely . Also, since it's just rewriting the exponent.
- When the exponent is not an integer, the result is not a single value but a set of values, where is any one of the natural logarithms of a. (A different approach is to consider Riemann surfaces.)
- In general, and are not necessarily the same set. However, they do need to have at least one value in common.
- Your derivation shows that is necessarily equal to one of the values of , namely i. The fact that 1 is another value of has no bearing on the value of .
- -- Meni Rosenfeld (talk) 09:01, 10 April 2012 (UTC)
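To make the multi-valuedness concrete, here is the fourth-root case worked out (a standard illustration; the particular chain of equalities is an example, not a quotation of the original post):
\[ 1^{1/4} = \{\, e^{2\pi i k/4} : k = 0, 1, 2, 3 \,\} = \{\, 1,\ i,\ -1,\ -i \,\}. \]
So a derivation of the form i = (i^4)^{1/4} = 1^{1/4} only shows that i is one of these four values; it does not allow the conclusion that i = 1.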
Can likelihood ratios used for estimation of post-test probability be continuous variables, or do they have to be dichotomous?
The article Pre- and post-test probability states, in a table in the section Estimation of post-test probability, that using likelihood ratios for estimation of post-test probability has the disadvantage of requiring a dichotomous variable. Likewise, the article Likelihood ratios in diagnostic testing distinguishes between an LR+ and an LR-, and makes no mention of varying the cutoff. Is it really correct that the use of likelihood ratios in diagnostic testing requires that the variable be dichotomous? Take a hypothetical example where you know the distributions of a continuous test variable within the populations with and without disease. Can we not then calculate
- \(\mathrm{LR}(t) = f(t \mid \text{disease}) \,/\, f(t \mid \text{healthy})\).
directly from the distributions?
I did some experimentation in R (programming language), using a hypothetical test with a normal distribution N(μ=70, sd=10) in the healthy population, and N(μ=95, sd=15) in the diseased population. Here is the function I used:
LR <- function(arg)
{
  eps <- 0.01
  # Probability that the test value falls in a small window around arg,
  # under the healthy distribution N(70, 10) and the diseased distribution N(95, 15)
  p_healthy_1 <- pnorm(arg - eps, mean = 70, sd = 10)
  p_healthy_2 <- pnorm(arg + eps, mean = 70, sd = 10)
  p_disease_1 <- pnorm(arg - eps, mean = 95, sd = 15)
  p_disease_2 <- pnorm(arg + eps, mean = 95, sd = 15)
  # Likelihood ratio: probability of the window given disease over probability given health
  return((p_disease_2 - p_disease_1) / (p_healthy_2 - p_healthy_1))
}
The function behaves more or less as I would have expected. Its value is 1.0 (i.e. indifference) at arg=82.35 (which is identical to the cutoff point I found by maximising the number of correct decisions). In the range 83 to 133, it rises steeply from 1.13 to 3359.45. When moving towards the origin from 82 to 50, it falls more gently from 0.94080371 to 0.05472334; this value, corresponding to an argument of 50, is a minimum. Then, when moving from 50 to 10, the function rises again, from 0.05472334 to 4.65981528, crossing LR=1 at arg=17.65869. I assume that the reason it rises at very low arguments is that the higher SD of the distribution in the diseased population becomes more important than the difference in means between the populations. I'm unable to figure out why it happens exactly at that point, though.
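For what it's worth, as eps tends to 0 the window ratio above converges to the ratio of the two probability density functions, which R can evaluate directly with dnorm (same hypothetical parameters as above; the name LR_exact is just an illustration):

# Exact density-ratio form of the likelihood ratio for the hypothetical test above
LR_exact <- function(arg)
{
  dnorm(arg, mean = 95, sd = 15) / dnorm(arg, mean = 70, sd = 10)
}
LR_exact(82.35)   # approximately 1, matching the indifference point reported above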
Anyway, my question is: are continuous likelihood ratios obtained using a function such as the one I've described here valid for deriving a post-test probability from a pre-test probability, for example by using Fagan's nomogram? --95.34.141.48 (talk) 22:54, 9 April 2012 (UTC)
- Yes, this is perfectly valid; I don't know why the article would say it's not. Bayes's theorem and rule also work for continuous distributions. That is, for probabilities you have
- And more generally
- Where P is the value of the pdf (if T is continuous) or pmf (if T is discrete) at the particular instantiation. That this follows from the probability version can be seen by considering the event .
- The calculation using the graphs of the Fagan nomogram in the article is then just a rearrangement of the terms. -- Meni Rosenfeld (talk) 08:38, 10 April 2012 (UTC)
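Spelled out (a sketch of the odds form referred to above, writing D for disease and f for the conditional density of the test variable T; this notation is introduced here only for illustration):
\[ \frac{P(D \mid T = t)}{P(\bar D \mid T = t)} = \frac{P(D)}{P(\bar D)} \cdot \frac{f(t \mid D)}{f(t \mid \bar D)}, \]
i.e. post-test odds equal pre-test odds times the likelihood ratio, which is the relation the Fagan nomogram encodes graphically.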
April 10
two parts
First, is having a designated driver, even one who doesn't prefer the role, a Nash equilibrium? (Since if he starts drinking he'll kill us all?)
Secondly, if this is the case, I would like a joke, a play on sobriety and 'equilibrium', but one that's funny. I can't think of one. 94.27.165.121 (talk) 01:04, 10 April 2012 (UTC)
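One way to see that it can be a Nash equilibrium (a minimal sketch with made-up payoffs): two friends each choose Drink or Drive, and if nobody stays sober everyone crashes. Payoffs (A, B):
- A drives, B drinks: (0, 10)
- A drinks, B drives: (10, 0)
- both drive: (1, 1)
- both drink: (-100, -100)
In the profile (A drives, B drinks), A switching to Drink falls from 0 to -100 and B switching to Drive falls from 10 to 1, so neither can gain by deviating alone; it is a Nash equilibrium even though A would rather B were the driver.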
my work so far
I know you like to see what I've done on my own, here is what I have: "What keeps anyone from getting drunk first? Nash equilibrium." I know, it doesn't really work. (I intended this one to be about social acceptability). Or: "The students all wanted to get drunk, but nobody wanted to be the one to get beer. They were in a Nash equilibrium".
This doesn't really work as a pun.
I've been trying for some time but I just can't do it on my own. Hope you can help here.
"What keeps professors from getting drunker than their students? Nash equilibrium..." this one 'sounds right' but doesn't make any sense. (it's meaningless). I'd like something along these lines but that works... 94.27.165.121 (talk) 01:12, 10 April 2012 (UTC)
- Take a look at our article Nash equilibria. — Fly by Night (talk) 01:49, 10 April 2012 (UTC)
- I read that article, which gives both formal and informal definitions, but does not use it in a joke. 134.255.115.229 (talk) 10:17, 10 April 2012 (UTC)
- Well, if dealing with driving, you might want to include the old car maker, Nash Motors. Let's see: "Two drunks drove an antique Nash convertible into a guardrail, and it was left hanging over the edge of a cliff. Neither wanted to try to get out, for fear they would send the car over. They were in a Nash equilibrium." StuRat (talk) 05:14, 10 April 2012 (UTC)
- That one's good, though a bit long. I wonder if it's possible to simplify by factoring out the Nash Motors term (making it a simple instead of a double pun), since 'equilibrium' is fine to pun on vs. 'precariousness' (edge of cliff) - you don't have to add Nash Motors to it to make the joke quadratic. To me, this is as good as maintaining one's equilibrium, i.e. sobriety. Could you try simplifying the joke, Stu? 134.255.115.229 (talk) 10:17, 10 April 2012 (UTC)
Volumes of parallelepipeds in higher dimensions.
One of the basic things we are taught in Calc III is that the area of the parallelogram formed by two vectors is simply given by . Additionally, the volume of the parallelepiped formed by the three vectors is simply given by . First, I began thinking about how to simply find the area of the parallelogram formed by two vectors . If , for instance, we cannot use the formula above since we cannot calculate a cross product in four-dimensional space (however, we can easily compute dot products). Thus, I formulated the following: . However, can be expressed in terms of the dot product. Thus, . This formula works for all , and can be shown to equal when . Doing this same process for three-dimensional parallelepipeds proved more difficult since the required angles were not as readily found. Additionally, if we wanted to extend this further by finding the volume of a four-dimensional parallelepiped defined by four linearly independent vectors, how would one even do that? Even in the base case of vectors in I wouldn't know how to compute this "four-volume". How would one formulate a generic formula for the -volume of the parallelepiped formed by vectors in -dimensional space? — Trevor K. — 04:38, 10 April 2012 (UTC) — Preceding unsigned comment added by Yakeyglee (talk • contribs)
- I think it's just the determinant of the matrix of the vectors determining the hyperparallelepiped. Widener (talk) 06:01, 10 April 2012 (UTC)
- So for example, a hyperparallelepiped formed by the four vectors in would have hypervolume . Widener (talk) 06:09, 10 April 2012 (UTC)
- As for finding the n-volumes of lower dimensional parallelepipeds in higher dimensional m-space, just find an orthonormal basis for the subspace of dimension n containing your parallelepiped (using Gram-Schmidt or some other way) and then it is the determinant of the matrix which transforms this basis into the vectors of your parallelepiped (you could do a coordinate transformation into the standard basis for to make things easier). In other words, I don't think there is a general formula (or if there is it would be complicated); the way you do this is more algorithmic. Widener (talk) 06:18, 10 April 2012 (UTC)
- The general name is parallelotope; follow the link for a more general approach to your problem. You may find it worthwhile to explore geometric algebra, which replaces the cross product () with the more useful wedge product (). — Quondum☏ 07:47, 10 April 2012 (UTC)
- The volume of the k-dimensional parallelotope in n-dimensional space, formed by the vectors \(v_1, \ldots, v_k\), is \(\sqrt{\det(A^{\mathrm T}A)}\), where A is the n × k matrix whose columns are \(v_1, \ldots, v_k\). 77.125.72.170 (talk) 09:34, 10 April 2012 (UTC)
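A quick numerical check of the Gram-determinant formula above (a sketch in R; the k = 2 vectors u and v in R^4 are made up, and the result is compared with the |u||v|·sin(theta) expression from the question):

# Two vectors spanning a parallelogram in R^4
u <- c(1, 2, 0, 3)
v <- c(0, 1, 4, 1)
A <- cbind(u, v)                      # 4 x 2 matrix with the vectors as columns
vol_gram <- sqrt(det(t(A) %*% A))     # square root of the Gram determinant
# Same area via |u||v|sin(theta), with cos(theta) = (u . v)/(|u||v|)
cos_theta <- sum(u * v) / (sqrt(sum(u^2)) * sqrt(sum(v^2)))
vol_trig <- sqrt(sum(u^2)) * sqrt(sum(v^2)) * sqrt(1 - cos_theta^2)
all.equal(vol_gram, vol_trig)         # TRUE up to floating-point error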
Notation, conditional probability
A quick question: Do the notations and mean exactly the same thing? Thanks! --91.186.78.4 (talk) 09:46, 10 April 2012 (UTC)
- Trying it out with Venn diagrams gives me the feeling I got it exactly wrong: and ? Which version, if any, is the correct one? --91.186.78.4 (talk) 12:22, 10 April 2012 (UTC)
Octahedral and icosahedral symmetry
I'm asking this question for The Doctahedron: is it possible to have a polyhedron with both octahedral and icosahedral symmetry? (I don't think it's possible, but I'm not completely sure.) Double sharp (talk) 11:10, 10 April 2012 (UTC)