How I got my networked Brother printer to work with i386 drivers on a Debian Jessie x86/amd64 system

Specifically, I’m running Debian 8 (“Jessie”) with kernel 3.16.0-4-amd64 on a desktop computer connected to the router via ethernet cable, and our printer connects wirelessly. The printer is actually a multi-function device, Brother MFC-J835DW.

As a prerequisite to printing from Debian in general, install cups:

sudo apt-get install cups

The Debian printing tutorial might contain other helpful or necessary information.

I followed all the steps at Brother’s Debian/Ubuntu x86/amd64 prerequisites page, plus a few extra steps that probably weren’t necessary but I’ll list them anyway.

1. Brother’s instructions say to install ia32-libs or lib32stdc++, but ia32-libs is no longer in the Debian repository, and lib32stdc++ would’ve had a dependency conflict that I am certainly not qualified to resolve. When I tried to install ia32-libs, APT told me that the package lib32ncurses5 had replaced it, so I just installed that:

sudo apt-get install lib32ncurses5

2. Install either csh or tcsh:

sudo apt-get install csh
sudo apt-get install tcsh

(I chose csh, so I only ran the first command.)

3. sudo mkdir /var/spool/lpd

4. This might only be necessary for some scanner-related functions, not printing, but I did it anyway because I figured it wouldn’t hurt:

sudo apt-get install sane-utils

5. Brother also says this was only necessary for a completely different line of Brother products on Debian systems that don’t have psutils installed, but I was running a Debian system that didn’t have psutils installed, so I figured this couldn’t hurt either:

sudo apt-get install psutils

6. Download the .deb files for the lpr driver and cupswrapper driver to ~/Downloads.

7. cd ~/Downloads

8. sudo dpkg -i --force-all mfcj835dwlpr-3.0.1-1.i386.deb

(Obviously, replace that .deb filename with your own.)

9. sudo dpkg -i --force-all mfcj835dwcupswrapper-3.0.0-1.i386.deb


10. Check to make sure two Brother drivers, one LPR and one CUPS, were installed:

dpkg -l | grep Brother

My output was:

ii mfcj835dwcupswrapper 3.0.0-1 i386 Brother CUPS Inkjet Printer Definitions
ii mfcj835dwlpr 3.0.1-1 i386 Brother lpr Inkjet Printer Definitions
ii printer-driver-brlaser 3-3 amd64 printer driver for (some) Brother laser printers
ii printer-driver-ptouch 1.3-8 amd64 printer driver Brother P-touch label printers

11. Install the printer to the system via the web interface:
11a. Go to the CUPS web interface in a web browser: http://localhost:631/
11b. Click on Administration.
11c. Follow the prompts and click everything as appropriate.
11d. My Brother driver was not listed among the many Brother drivers available, for some reason, even though I was able to choose MFC-J835DW at the previous step just fine, so I clicked on Browse… to add my own .ppd file.
11e. I found mine in /usr/share/ppd/Brother/. If you can’t find yours at first, try running sudo updatedb to refresh the file-name database that locate searches, then run locate "*.ppd" (quote the pattern so the shell doesn’t expand it). That command didn’t find my Brother .ppd file until I ran updatedb.

12. Click through the web interface to set the printer options. I was able to click on a button to autodetect my printer’s default options.

13. Print a test page.

Never in my life have I seen such a beautiful test-print page.

Posted in Computers, Freakin' sweet, Linux

If a divides b and a divides c, then a divides (b-c)

\(\)In reading about Euclid’s proof of the infinitude of prime numbers, the only part that wasn’t completely clear to me was this:

If \(p\) divides \(P\) and \(q\), then \(p\) would have to divide the difference of the two numbers, which is \( (P + 1) − P\) or just \(1\).

Well, I don’t know…why is that true? Why does a number have to divide the difference of two numbers if it divides each of those numbers separately?

It turns out that my textbook from my introduction to proofs class, Mathematical Proofs by Chartrand, Polimeni, and Zhang (a class that was taught by Dr. Zhang herself at Western Michigan University), contains the proof of a more general statement. (Note: For this theorem, the vertical bar \( \vert \), which looks identical to the absolute value symbol, is used to mean “divides”; that is, \( a ~\vert~ b \) if and only if \(a\) is a factor of \(b\), that is, \(a\) “guzzinta” \(b\) an integer number of times, that is, \( b \div a \) equals an integer.)

Theorem: If \( a~\vert~b \) and \( a ~\vert~c \), then \( a ~\vert~ (bx + cy) \) for all integers \(x\) and \(y\).

Proof: Let \( a ~\vert~ b \) and \( a ~\vert~ c \). Then there exist integers \(q_{\scriptscriptstyle 1}\) and \(q_{\scriptscriptstyle 2}\) such that \(b=aq_{\scriptscriptstyle 1}\) and \(c=aq_{\scriptscriptstyle 2}\). Hence, for integers \(x\) and \(y\),

$$ bx + cy = aq_{\scriptscriptstyle 1}x + aq_{\scriptscriptstyle 2}y = a(q_{\scriptscriptstyle 1}x + q_{\scriptscriptstyle 2}y). $$

Since \( q_{\scriptscriptstyle 1} x + q_{\scriptscriptstyle 2} y \) is an integer, \( a ~\vert~ (bx + cy) \). $$\tag*{$\blacksquare$}$$

The specific case where \( x=1 \) and \(y=-1\) is used in Euclid’s proof of the infinitude of primes.

Aside from this abstract proof, it’s easy to see why the theorem would be true with a simple example. Consider \( a = 5 \) and \(b \) and \(c\) any multiples of \(5\), say \(25\) and \(100\). Since these are both multiples of \(5\), they are some multiple of \(5\) apart from each other, so their difference is also obviously a multiple of \(5\). It’s just counting by fives, or by whatever the factor is in the example you choose. It’s hard to think of a theorem that could be more obvious and intuitive, even to an elementary-schooler.
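For the skeptical (or the bored), the theorem is also easy to spot-check numerically. Here is a small Python sketch; the search ranges are arbitrary, since the theorem of course holds for all integers:

```python
# Spot-check the theorem: if a | b and a | c, then a | (b*x + c*y)
# for all integers x and y. The ranges below are arbitrary samples.

def divides(a, b):
    """True if a divides b (a assumed nonzero)."""
    return b % a == 0

for a in range(1, 20):
    for q1 in range(-10, 11):
        for q2 in range(-10, 11):
            b, c = a * q1, a * q2          # guarantees a | b and a | c
            for x in (-3, -1, 0, 1, 2, 7):
                for y in (-5, -1, 0, 1, 4):
                    assert divides(a, b * x + c * y)

# The special case x = 1, y = -1 used in Euclid's proof: a | (b - c).
assert divides(5, 100 - 25)
print("all checks passed")
```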

As a side note, Mathematical Proofs is an extremely good introduction to mathematical proofs. It is one of my favorite textbooks. It is detailed, thorough, and extremely long, with great explanations that they call “proof strategy” before many proofs and summaries that they call “proof analysis” after many proofs. It has chapters on sets, logic, direct proofs, proof by contradiction, mathematical induction, equivalence relations, functions, cardinalities of sets, number theory, calculus, linear algebra, topology, group theory, and ring theory. I love this book and I highly recommend it for anyone who might want to learn or review how to do a lot of basic and important proofs from many fields of mathematics.

Posted in Math, Theorems

Prove that a geometric sequence converges to 0 using Bernoulli’s inequality

Here is a good problem from my first exam in Advanced Calculus (introductory real analysis) taught by Yuri Ledyaev at Western Michigan University.

\(\)Prove that \(\lim_{n \to \infty} \frac{2^n}{3^n} = 0\).

Proof: This proof uses Bernoulli’s inequality, which states that \( (1+\alpha)^n \geq 1 + \alpha n\) for all \(\alpha > -1\) and \(n \in \mathbb{N}\). The epsilon definition of the limit of a sequence says that \(\lim_{n \to \infty} a_n = L\) if, for any \(\epsilon > 0\), there is a threshold \(N\) such that \(\left|~a_n - L~\right| < \epsilon\) for every \(n > N\). The purpose of most proofs of this type is to use algebraic manipulation to find that threshold. For this limit, we need to show that for all \(n>N\), we’ll have \(\left|~\frac{2^n}{3^n} - 0~\right|<\epsilon\), or equivalently (taking reciprocals of both positive sides, which flips the inequality) that \(\frac{1}{\frac{2^n}{3^n}} > \frac{1}{\epsilon}\). Observe,

$$ \begin{eqnarray}
\frac{1}{\frac{2^n}{3^n}} &=& \frac{1}{\left(\frac{2}{3}\right)^n} \nonumber \\[1.1ex]
&=& \left(\frac{1}{\frac{2}{3}}\right)^n \nonumber \\[1.1ex]
&=& \left(\frac{3}{2}\right)^n \nonumber \\[1.1ex]
&=& \left(1 + \frac{1}{2}\right)^n \nonumber \\[1.1ex]
&\geq& 1 + \frac{1}{2}n \nonumber \\[1.1ex]
&>& 1 + \frac{1}{2}N \nonumber \\[1.1ex]
&>& \frac{1}{\epsilon}
\end{eqnarray} $$

Thus, we have \(\frac{1}{2}N > \frac{1}{\epsilon} - 1\), so taking \(\frac{1}{2}N > \frac{1}{\epsilon}\) will surely suffice. It follows that we need \(N > \frac{2}{\epsilon}\), and choosing any \(n>N\) will ensure that \(\frac{1}{\frac{2^n}{3^n}}>\frac{1}{\epsilon}\), i.e., \(\left| ~ \frac{2^n}{3^n} - 0 ~ \right|<\epsilon\). $$\tag*{$\blacksquare$}$$

(The original version of this post had “series” instead of “sequence” in the title. The content of the post is unchanged.)
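As a sanity check on the algebra, a few lines of Python confirm that the threshold \(N = \frac{2}{\epsilon}\) from the proof really works; the sampled epsilon values below are arbitrary:

```python
# Check: with N = 2/epsilon, every n > N satisfies (2/3)**n < epsilon.
# The epsilon values sampled here are arbitrary.
for eps in (0.5, 0.1, 0.01, 0.001):
    N = 2 / eps
    for n in range(int(N) + 1, int(N) + 200):
        assert (2 / 3) ** n < eps
print("threshold works for all sampled epsilons")
```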

Posted in Math

Three-bean salad probability density problem

\(\)A recipe for three-bean salad includes three different types of beans, \(A\), \(B\), and \(C\). Let the relative weights (masses) of the three bean varieties in a given batch of salad be represented by \(X\), \(Y\), and \(Z\), respectively, such that \(X + Y + Z = 1\). And let the joint probability density of \(X\) and \(Y\) be given by \( f(x,y)=kx^2y \) for \(x>0\), \(y>0\), and \(x+y<1\), and \( f(x,y) = 0 \) otherwise.

Here is a graph of the joint probability density of \( (X,Y) \); that is, the above equation is only valid in the shaded region.
Graph of the pdf f(x,y)

a. Find \(k\).

To calculate what \(k\) is, we first have to realize that \( P(-\infty < X < \infty, -\infty < Y < \infty) = 1 \), so

$$ \begin{eqnarray}
\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x,y) \, dx \, dy &=& 1 \nonumber \\[9pt]
\int_0^1 \int_0^{1-y} k x^2 y \, dx \, dy &=& 1 \nonumber \\[9pt]
k \int_0^1 \left[\dfrac{1}{3}yx^3 \right]_0^{1-y} \, dy &=& 1 \nonumber \\[9pt]
k \int_0^1 -\frac{1}{3}y^4 + y^3 - y^2 + \frac{1}{3}y ~ \, dy &=& 1 \nonumber \\[9pt]
k \left[-\frac{1}{15}y^5 + \frac{1}{4}y^4 - \frac{1}{3}y^3 + \frac{1}{6}y^2\right]_0^1 &=& 1 \nonumber \\[9pt]
k \left(\frac{1}{60} \right) &=& 1 \nonumber \\[9pt]
k &=& 60
\end{eqnarray} $$

b. Assuming \( X \), \( Y \), and \( Z \) are random variables, what is the probability that bean variety \( A \) makes up more than half the weight of beans in a given batch?
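As a numeric sanity check (a midpoint Riemann sum in pure Python; the grid size is arbitrary), \(f(x,y) = 60x^2y\) should integrate to \(1\) over the triangular region:

```python
# Midpoint Riemann-sum check that f(x, y) = 60 x^2 y integrates to 1
# over the triangle x > 0, y > 0, x + y < 1, confirming k = 60.
n = 1000                 # grid resolution (arbitrary)
h = 1.0 / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * h    # midpoint of cell in x
    for j in range(n):
        y = (j + 0.5) * h
        if x + y < 1:
            total += 60 * x**2 * y * h * h
assert abs(total - 1.0) < 0.01
print(total)
```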

This question is asking for the probability \( P(X > 0.5) \). All we really need to do is add another line to our pdf graph, \( x=0.5 \), and re-integrate with new bounds of integration.

From the graph, we can see that the probability corresponding to the shaded region is

$$ \begin{eqnarray}
P(X > 0.5) &=& \int_0^{0.5} \int_{0.5}^{1-y} 60x^2y \, dx \, dy \nonumber \\[9pt]
&=& 60\int_0^{0.5} \left[\frac{1}{3}yx^3\right]_{0.5}^{1-y} \, dy \nonumber \\[9pt]
&=& 60\int_0^{0.5} -\frac{1}{3}y^4 + y^3 - y^2 + \frac{1}{3}y - \frac{1}{24}y ~ \, dy \nonumber \\[9pt]
&=& 60\int_0^{0.5} -\frac{1}{3}y^4 + y^3 - y^2 + \frac{7}{24}y ~ \, dy \nonumber \\[9pt]
&=& 60 \left[-\frac{1}{15}y^5 + \frac{1}{4}y^4 - \frac{1}{3}y^3 + \frac{7}{48}y^2 \right]_0^{0.5} \nonumber \\[9pt]
&=& 60 \left(\frac{1}{120} \right) \nonumber \\[9pt]
&=& 0.5
\end{eqnarray} $$
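This answer can be sanity-checked numerically with a midpoint Riemann sum in pure Python (the grid size is arbitrary):

```python
# Midpoint Riemann-sum check of P(X > 0.5) for f(x, y) = 60 x^2 y
# on the triangle x > 0, y > 0, x + y < 1; the answer should be 0.5.
n = 1000
h = 1.0 / n
p = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        if x + y < 1 and x > 0.5:
            p += 60 * x**2 * y * h * h
assert abs(p - 0.5) < 0.01
print(p)
```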

c. What is the probability that bean variety \( C \) makes up more than half of the bean weight?

This question is asking for \( P(Z > 0.5) \), which is to say \( P(X+Y\leq0.5) \). We need yet another graph to help us define the region of the pdf that accounts for this probability.

Setting up the integrals is basically the same as above:
$$ \begin{eqnarray}
P(Z>0.5) &=& \int_0^{0.5} \int_0^{0.5-y} 60x^2y \, dx \, dy \nonumber \\[9pt]
&=& 60 \int_0^{0.5} \left[\frac{1}{3}yx^3 \right]_0^{0.5-y} \, dy \nonumber \\[9pt]
&=& 60 \int_0^{0.5} -\frac{1}{3}y^4 + \frac{1}{2}y^3 - \frac{1}{4}y^2 + \frac{1}{24}y ~ \, dy \nonumber \\[9pt]
&=& 60 \left[-\frac{1}{15}y^5 + \frac{1}{8}y^4 - \frac{1}{12}y^3 + \frac{1}{48}y^2 \right]_0^{0.5} \nonumber \\[9pt]
&=& 60\left(\frac{1}{1920}\right) \nonumber \\[9pt]
&=& \frac{1}{32}
\end{eqnarray} $$
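A quick numeric check of this answer, again with a midpoint Riemann sum in pure Python (grid size arbitrary):

```python
# Midpoint Riemann-sum check of P(Z > 0.5) = P(X + Y < 0.5) for
# f(x, y) = 60 x^2 y on x > 0, y > 0, x + y < 1; expect 1/32 = 0.03125.
n = 1000
h = 1.0 / n
p = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        if x + y < 0.5:
            p += 60 * x**2 * y * h * h
assert abs(p - 1 / 32) < 0.005
print(p)
```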

d. Probability that none of the three varieties will take up more than half the weight.

This question is asking for \( P(X\leq0.5, Y\leq0.5, Z\leq0.5) \), which equals \( P(X\leq0.5, Y\leq0.5, X+Y > 0.5) \). Thus, the region for this probability is between \( y = 0.5 \), \( x = 0.5 \), and \(x+y = 0.5 \):

$$ \begin{eqnarray}
P(X\leq0.5, Y\leq0.5, X+Y > 0.5) &=& \int_0^{0.5} \int_{0.5-y}^{0.5} 60x^2y \, dx \, dy \nonumber \\[9pt]
&=& 60 \int_0^{0.5} \left[\frac{1}{3}yx^3 \right]_{0.5-y}^{0.5} \, dy \nonumber \\[9pt]
&=& 60 \int_0^{0.5} \frac{1}{24}y - \left(-\frac{1}{3}y^4 + \frac{1}{2}y^3 - \frac{1}{4}y^2 + \frac{1}{24}y\right) ~ \, dy \nonumber \\[9pt]
&=& 60 \int_0^{0.5} \frac{1}{3}y^4 - \frac{1}{2}y^3 + \frac{1}{4}y^2 ~ \, dy \nonumber \\[9pt]
&=& 60 \left[\frac{1}{15}y^5 - \frac{1}{8}y^4 + \frac{1}{12}y^3 \right]_0^{0.5} \nonumber \\[9pt]
&=& 60\left(\frac{3}{640}\right) \nonumber \\[9pt]
&=& \frac{9}{32}
\end{eqnarray} $$
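This result, too, can be spot-checked numerically with a midpoint Riemann sum in pure Python (grid size arbitrary):

```python
# Midpoint Riemann-sum check of P(X <= 0.5, Y <= 0.5, X + Y > 0.5)
# for f(x, y) = 60 x^2 y on x > 0, y > 0, x + y < 1;
# expect 9/32 = 0.28125.
n = 1000
h = 1.0 / n
p = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        if x <= 0.5 and y <= 0.5 and x + y > 0.5:
            p += 60 * x**2 * y * h * h
assert abs(p - 9 / 32) < 0.01
print(p)
```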

e. Marginal density of \( (X, Z) \).

We can calculate this marginal probability function by simply replacing \( Y \) in the given density function with \(1-X-Z\). This is because, with the given restriction that \(X+Y+Z=1\), we have two degrees of freedom in the values of our variables, and we have a joint density function with two variables, i.e., \(f(x,y) = 60x^2y\), so we can use \(X+Y+Z=1\) to easily find the marginal probability function of either \( (X, Z) \) or \( (Y, Z) \). Substituting \( Y \) with \(1-X-Z\) gives us \(f(x,z) = 60x^2(1-x-z) = 60x^2 – 60x^3 – 60x^2z\).

f. Marginal density of \( Z \) alone.

Recall that the marginal density of a variable (or even two or more variables) is the probability density of that variable(s) irrespective of the values of the other variables. Since the marginal density of a variable(s) does not take into account anything about the other variable(s), we must determine the density of the former over all possible values of the latter. Thus, we calculate the marginal probability density of a continuous variable by integrating the joint probability function over all the possible values of the other variables.

In this case, it might not be obvious what function we need to integrate, and what variables we need to integrate with respect to. To wit, we don’t integrate the original density function of \( (X, Y) \); rather, we integrate \(f(x,z) \) over all possible values of \(x\), to get the marginal density of \(z\). This is the only function we have with \(z\) in it, so it’s the one we use. Finally, the bounds of \(x\) are \(0\) to \(1-z\); to see this, note that since the original density function was valid for \(y>0\), and since \(y=1-x-z\), the new function \(f(x,z)\) is valid for \(1-x-z>0\), or \(x<1-z\).

$$ \begin{eqnarray}
f_z (z) &=& \int_0^{1-z} 60x^2 - 60x^3 - 60x^2 z \, dx \nonumber \\[9pt]
&=& 60 \left[\frac{1}{3}x^3 - \frac{1}{4}x^4 - \frac{1}{3}x^3z \right]_0^{1-z} \nonumber \\[9pt]
&=& 60 \left[\frac{1}{3}(1-z)^3 - \frac{1}{4}(1-z)^4 - \frac{1}{3}z(1-z)^3 \right] \nonumber \\[9pt]
&=& 60 \left[\frac{1}{3}(1-z)^3(1-z) - \frac{1}{4}(1-z)^4 \right] \nonumber \\[9pt]
&=& 60 \left[\frac{1}{3}(1-z)^4 - \frac{1}{4}(1-z)^4 \right] \nonumber \\[9pt]
&=& 60\left(\frac{1}{12}\right)(1-z)^4 \nonumber \\[9pt]
&=& 5(1-z)^4
\end{eqnarray} $$
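As a numeric check, integrating \(f(x,z) = 60x^2(1-x-z)\) over \(x\) at a few sample values of \(z\) (midpoint rule, pure Python) should match \(5(1-z)^4\), since \(60 \cdot \frac{1}{12} = 5\):

```python
# Numeric check of the marginal density of Z: integrate
# f(x, z) = 60 x^2 (1 - x - z) over x in (0, 1 - z) at sample z values
# and compare with the closed form 5 * (1 - z)**4.
n = 10000                        # subintervals (arbitrary)
for z in (0.0, 0.2, 0.5, 0.9):
    width = 1.0 - z
    h = width / n
    fz = 0.0
    for i in range(n):
        x = (i + 0.5) * h        # midpoint rule
        fz += 60 * x**2 * (1 - x - z) * h
    assert abs(fz - 5 * (1 - z)**4) < 1e-5
print("marginal density of Z checks out")
```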

Posted in Math

A tricky joint probability density problem

\(\)Here is problem 7.1.9 from my current probability & statistics textbook, Probability and Statistical Inference by Bartoszynski and Bugaj, which I’m using in the Master’s-level Statistical Theory class taught by Dr. Bugaj herself at Western Michigan University:

Variables \(X\) and \(Y\) have the joint density \(f(x,y) = 1/y\) for \(0 < x < y < 1\) and \(f(x,y) = 0\) otherwise. Find \(P(X + Y > 0.5)\).

As with most joint pdf/cdf problems, the two most important and difficult things to do are identify the correct region over which to integrate and define the bounds of integration correctly. I made a basic graph of the region that we need to integrate over:

I don’t know how to shade a region, but the region we need is above the two graphed lines and below \(y=1\). The line with the positive slope is \(y=x\), and the line with the negative slope is \(x+y=0.5\), or \(y=-x+\frac{1}{2}\). The given pdf is only valid for \(0< x < y < 1\), i.e., when \(y > x\) but \(y < 1\), so that's why the line \(y=x\) is needed regardless of the specific question. And this specific question asks for the probability that \(X+Y>0.5\), i.e., when \(Y>0.5-X\), so we need to consider only the part of the pdf above the line \(y = -x + \frac{1}{2}\). Therefore, the region we need to integrate over is the quadrilateral with vertices \((0, \frac{1}{2}), (0, 1), (1, 1)\), and \((\frac{1}{4}, \frac{1}{4})\).

Whether we put the \(dx\) or \(dy\) on the inside or outside of the double integral, we have to break this region up into two sub-regions—at least, that’s the only way I know of to integrate this type of region. I’ll first show how to integrate this with \(dx\) on the outside and \(dy\) on the inside, and then vice versa.

(i) To integrate with the \(x\)-direction on the outside and the \(y\)-direction on the inside, the two sub-regions of this quadrilateral have to be from \(x=0\) to \(x=\frac{1}{4}\) and from \(x=\frac{1}{4}\) to \(x=1\). This makes the vertical bounds of integration \(y=-x+\frac{1}{2}\) to \(y=1\) and \(y=x\) to \(y=1\), respectively.

$$ \begin{eqnarray}
P(X+Y>0.5) &=& \int_0^{0.25} \int_{0.5-x}^1 \frac{1}{y}\,dy\,dx + \int_{0.25}^1 \int_x^1 \frac{1}{y}\,dy\,dx \nonumber \\[9pt]
&=& \int_0^{0.25} -\ln (0.5-x)\,dx + \int_{0.25}^1 -\ln x\,dx \nonumber \\[9pt]
&=& (0.25) + (0.75 + 0.25\ln 0.25) \nonumber \\[9pt]
&=& 1 + 0.25\ln 0.25 \nonumber \\[9pt]
&\approx& 0.6534
\end{eqnarray} $$

You can do the left-hand integral using integration by parts, but in this case, it conveniently computes to \(0.25\), which I found by using my TI-83 or Wolfram Alpha.

(ii) To integrate with the \(y\)-direction on the outside and the \(x\)-direction on the inside, we have to divide the region into two sub-regions vertically. The bottom sub-region goes from \(x = 0.5 – y\) to \(x = y\) and from \(y = \frac{1}{4}\) to \(y = \frac{1}{2}\), and the top sub-region goes from \(x = 0\) to \(x = y\) and from \(y = \frac{1}{2}\) to \(y = 1\).

$$ \begin{eqnarray}
P(X+Y>0.5) &=& \int_{0.25}^{0.5} \int_{0.5-y}^y \frac{1}{y} \,dx\,dy + \int_{0.5}^1 \int_0^y \frac{1}{y}\,dx\,dy \nonumber \\[9pt]
&=& \int_{0.25}^{0.5} \left(\frac{y}{y} - \frac{0.5-y}{y}\right)\,dy + \int_{0.5}^1 1 \,dy \nonumber \\[9pt]
&=& \int_{0.25}^{0.5} \left(2 - \frac{1}{2y}\right) \,dy + \int_{0.5}^1 1 \,dy \nonumber \\[9pt]
&=& (0.5 + 0.5\ln 0.25 - 0.5\ln 0.5) + (0.5) \nonumber \\[9pt]
&=& 1 + 0.5\ln 0.25 - 0.5\ln 0.5 \nonumber \\[9pt]
&\approx& 0.6534
\end{eqnarray} $$
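Both computations can be sanity-checked with a midpoint Riemann sum over the region \(0 < x < y < 1\), \(x + y > 0.5\) (pure Python; grid size arbitrary):

```python
import math

# Midpoint Riemann-sum check that P(X + Y > 0.5) = 1 + 0.25*ln(0.25)
# for the density f(x, y) = 1/y on 0 < x < y < 1.
n = 1000
h = 1.0 / n
p = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        if x < y and x + y > 0.5:
            p += (1.0 / y) * h * h
expected = 1 + 0.25 * math.log(0.25)   # about 0.6534
assert abs(p - expected) < 0.01
print(p, expected)
```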

Looking at the second-to-last line of each of these two equation displays, we can see that \(1 + 0.25\ln 0.25\) and \(1 + 0.5\ln 0.25 - 0.5\ln 0.5\) evaluate to the same number. Therefore,

$$ \begin{eqnarray}
1 + 0.25\ln 0.25 &=& 1 + 0.5\ln 0.25 - 0.5\ln 0.5 \nonumber \\[9pt]
0.25\ln 0.25 &=& 0.5\ln 0.25 - 0.5\ln 0.5 \nonumber \\[9pt]
0.5\ln 0.5 &=& 0.25\ln 0.25
\end{eqnarray} $$

How is that?! It results from the “logarithm power rule”: \(\log_a (b^y) = y\log_a b\). So with those natural logarithms above, we can bring the coefficient up as an exponent of the argument of the \(\ln\):
$$ \begin{eqnarray}
0.5\ln 0.5 &=& 0.25\ln 0.25 \nonumber \\[9pt]
\ln (0.5^{0.5}) &=& \ln (0.25^{0.25}) \nonumber \\[9pt]
e^{\ln (0.5^{0.5})} &=& e^{\ln (0.25^{0.25})} \nonumber \\[9pt]
0.5^{0.5} &=& 0.25^{0.25} \nonumber \\[9pt]
\frac{1}{2^{\frac{1}{2}}} &=& \frac{1}{4^{\frac{1}{4}}} \nonumber \\[9pt]
\frac{1}{\sqrt{2}} &=& \frac{1}{(2^2)^{\frac{1}{4}}} \nonumber \\[9pt]
\frac{1}{\sqrt{2}} &=& \frac{1}{2^{\frac{1}{2}}} \nonumber \\[9pt]
\frac{1}{\sqrt{2}} &=& \frac{1}{\sqrt{2}}
\end{eqnarray} $$

Finally, I thought it was worth noting that the integration approach that seemed the most natural to me, dividing the region into two sub-regions vertically (i.e., number ii above), actually involved slightly harder algebra and a longer expression for the answer. Ultimately, though, it’s always best to choose the integration approach that you’re most comfortable with and confident in, unless you know the integral is going to be much harder that way for some reason (integration by parts, for example).

Posted in Math

The internet is the best

Specifically, Stack Exchange is the best.

I have an mp3 collection of Vivaldi’s complete works, which I acquired by…certain unspecified means, and all the mp3 files were arranged in sub-directories and sub-sub-directories, which is kind of a pain in the ass. So I wanted to copy all the mp3 files from these various depths of directories into a single, sub-directory-less folder.

After a few searches led me to some answers telling me how to copy a bunch of files of a certain type to a different folder while preserving the directory structure, I wised up and removed “recursive” from my search terms and found this wonderful thread at Stack Exchange. The question says, “Linux: Copy all files by extension to single dirrectory [sic]”. The user wanted to copy all files of a certain type from a series of folders and sub-folders to a new folder without preserving the folder tree. Just like me!

The answer that received the most votes and that worked for me was (after cd’ing into the top folder of this complete works collection):

find . -name "*.TIF" -exec cp {} new \;

except, of course, I replaced “TIF” with “mp3” and “new” with the full path of the destination folder.

So the find command is pretty useful. It can find all kinds of things. For instance, it can find files that have a certain set of permissions or that were modified on or after a certain date. I had never used it before today. I’m slowly inching my way up from Linux beginner to intermediate user…

Posted in Computers, Interwebs, Linux

My recent computer upgrade and troubleshooting

This May, I upgraded the motherboard, CPU, and RAM of my 7-year-old computer, and the machine got stuck in a reboot loop. It always POSTed successfully (though sometimes it beeped twice) and loaded the motherboard’s splash screen, but rebooted right after the splash screen. The motherboard was a Gigabyte GA-F2A78M-D3H. The CPU was the AMD A6 6420K. The RAM was a single brand-new 8-GB PC3 12800 DDR3 “Desktop Memory Multi” from PNY.

I could enter the BIOS and change/view everything there. Whether I entered the BIOS or not, it looked like the system hung and restarted when trying to load the OS. After the Gigabyte splash screen disappeared, the monitor’s color changed from black to a sort of dark maroonish-purple color like it always had, indicating it was loading Ubuntu (the OS on my SATA hard drive). This was what it was supposed to do—except, y’know, not restart. This Ubuntu-ish color was displayed for 1 or 2 seconds. No text or anything else appeared.

This happened when I booted to the regular old SATA hard drive which was functioning in my previous configuration and which I had no reason to believe was faulty. It happened when I tried to boot to an Ubuntu live CD. It happened when I tried to boot to an Ubuntu USB stick.

The USB stick was the most “successful”: after the BIOS splash screen, some text along the lines of “Ubuntu 12.04” appeared for several seconds, along with a little keyboard and mouse icon. It looked like a boot disk should look. But then the system restarted after a minute or two.

This also happened when I unplugged the DVD drive from both the power supply and motherboard, when I unplugged both it and the HDD in order to boot to USB, when I unplugged the HDD to boot to DVD, when I unplugged all the other leads like the LED lights and USB lead. This happened when I inserted the RAM stick into slot #1 or #4.

When I unplugged both the DVD drive and the HDD from the mobo, it did not restart after the Gigabyte splash screen; rather, it told me no bootable drive was detected and would I like to enter the BIOS. Clearly, loading an OS was the fail point.

In the BIOS’s M.I.T. options, I saw that the right amount of RAM was detected, along with the right CPU of the right caliber. The CPU temperature was always around 37-40 degrees Celsius. I tried RAM slots #1 and #4 because when I initially inserted it into the slot that’s labeled 1 on the mobo itself, the M.I.T. status screen showed 8 GB of RAM in slot #4 (and nothing in the others). Then when it was in the slot labeled 4, the M.I.T. status screen showed 8 GB in slot #1 (and nothing in the others). So, I don’t know, maybe there was some mislabeling by Gigabyte, but I doubted the RAM was a problem. Both the CPU fan and the case fan that came with the case spun just fine.

Finally, I should mention that the reason I was upgrading my PC was that I assumed my old motherboard or CPU was dying, because even though it (usually) booted into Ubuntu just fine, the computer couldn’t handle even the slightest resource-heavy task. Just opening Firefox, for instance, caused an instant reboot.

I gleaned from the internet that a dying mobo was a good bet, though a failing PSU could also have been the culprit. I wasn’t completely sold on the dying mobo explanation because I saw nothing like this on it. My new configuration was of course only partly new: it had the same old 7-year-old PSU in the same old 7-year-old case. In my old system, to try to diagnose its problem, I ran a memory stress test from the BIOS—actually, two of them: one for a couple hours and one overnight. Both times, the PSU seemed to handle it, because it didn’t fail or restart or anything, so I assumed the PSU was working fine. The old computer could remain on for a few days, logged in to Ubuntu doing nothing, so I don’t know if that means the PSU was fine or if it was merely strong enough to handle idling but wasn’t strong enough to handle the more resource-heavy task of powering a new, improved mobo and new, improved CPU.

My new motherboard’s BIOS has no option for a RAM stress test—believe me, I looked everywhere four or five times. Kinda sucky. I would have liked to rule out defective new RAM before buying a new power supply, but I knew of no way to do that.

So I bought a new power supply and that solved every problem.

Actually, I bought a new case that came with a 500-watt PSU, because it was a great deal at Newegg and having a new case with USB 3.0 connections and other benefits sounded like a good idea, especially since this case/PSU combo was cheaper than most 500-watt power supplies by themselves. (No, it isn’t cheap or chintzy. It’s Rosewill, a good and reliable budget brand. My old case/PSU was also Rosewill, and it lasted over 7 and a half years.)

This makes me wonder if my old CPU and motherboard are actually perfectly healthy and could be revived into a low-powered budget computer. I have no need for that, nor anywhere to put it nor anything to do with it, but I won’t throw them away just yet.

A final note: the brass motherboard standoffs that come with any new motherboard are really necessary! Don’t forget them! When I put my new motherboard into my new case, I forgot about them, and the CPU and motherboard couldn’t even get any power. I was distraught and panicky, until I noticed that I hadn’t used these little cylindrical brass things that came in a small baggy and the motherboard wasn’t really screwed into the case in a way that seemed normal and correct to me. When I applied the brass standoffs correctly, everything worked great and I haven’t had a single hardware problem since. I don’t know how the case/PSU/mobo knows not to supply electricity to a mobo that is directly contacting the case, but it’s a good thing it does!

Posted in Computers

Proving that a particular sequence is a Cauchy sequence

\(\)Here is one of my favorite homework problems from my Advanced Calculus (introductory real analysis) class at Western Michigan University. It is problem 7 from Chapter 1.6 of Advanced Calculus: Theory and Practice by John Petrovic.

Let \( 0 < r < 1\) and \(M>0\), and suppose that \(\{a_n\}\) is a sequence such that \(|a_{n+1} - a_n| \leq Mr^n\) for all \(n \in \mathbb{N}\). Prove that \(\{a_n\}\) is a Cauchy sequence.

Proof: Recall that \(\{a_n\}\) is a Cauchy sequence if and only if \(\forall \epsilon > 0\), \(\exists N\) such that, \(\forall m > n > N\), \(|a_m - a_n| < \epsilon\). Informally, the given information about \(\{a_n\}\) means that any two consecutive terms of the sequence are separated by no more than a number \(M\) multiplied by a number \(r^n\) that is less than \(1\); how much less than \(1\) depends on how large \(n\) is. So the product \(Mr^n\) approaches \(0\) as \(n\) approaches \(\infty\). We have to prove that a sequence with this property also has the defining property of a Cauchy sequence. If you’re not fresh on Cauchy sequences, one important thing about them is that, since \(m>n\), there might be several (or millions or quintillions) of terms between \(a_n\) and \(a_m\), which we represent by \(a_{n+1}, a_{n+2}, \dots, a_{m-1}\). Thus, using the ol’ trick of adding and subtracting the same thing (many times) so as not to change the value of the expression,

$$ \begin{eqnarray}
|a_m - a_n| &=& |(a_m - a_{m-1}) + (a_{m-1} - a_{m-2}) + \dots + (a_{n+2} - a_{n+1}) + (a_{n+1} - a_n)| \nonumber \\[5pt]
&\leq& |a_m - a_{m-1}| + |a_{m-1} - a_{m-2}| + \dots + |a_{n+2} - a_{n+1}| + |a_{n+1} - a_n|
\end{eqnarray} $$

(by the Triangle Inequality).

But notice: Since we are given that \(|a_{n+1} - a_n| \leq Mr^n\) for all \(n \in \mathbb{N}\), it follows that \(|a_m - a_{m-1}| \leq Mr^{m-1}\) and \(|a_{m-1} - a_{m-2}| \leq Mr^{m-2}\) and \(|a_{n+2} - a_{n+1}| \leq Mr^{n+1}\) and \(|a_{n+1} - a_n| \leq Mr^n\). Therefore, we can pick up where we left off:

$$ \begin{eqnarray}
|a_m - a_{m-1}| + \dots + |a_{n+1} - a_n| &\leq& Mr^{m-1} + Mr^{m-2} + \dots + Mr^{n+1} + Mr^n \nonumber \\[5pt]
&=& Mr^n (r^{m-1-n} + r^{m-2-n} + \dots + r + 1) \nonumber \\[5pt]
&<& Mr^n \left(\frac{1}{1-r}\right) ~ \text{[by a property of geometric series]} \nonumber \\[5pt]
&=& \frac{Mr^n}{1-r}
\end{eqnarray} $$

We have shown that \(|a_m - a_n| < \frac{Mr^n}{1-r}\). To prove that \(\{a_n\}\) is a Cauchy sequence, we must show that \(|a_m - a_n| < \epsilon\). Thus, we should choose \(N\) such that \(\frac{Mr^N}{1-r} < \epsilon\), and it will follow that the same inequality holds for \(n\), which is greater than \(N\). Observe:

$$ \begin{eqnarray}
\frac{Mr^N}{1-r} &<& \epsilon \nonumber \\[5pt]
r^N &<& \frac{\epsilon(1-r)}{M} \nonumber \\[5pt]
\ln(r^N) &<& \ln\left(\frac{\epsilon(1-r)}{M}\right) \nonumber \\[5pt]
N\ln r &<& \ln\left(\frac{\epsilon(1-r)}{M}\right) \nonumber \\[5pt]
N &>& \frac{\ln\left(\frac{\epsilon(1-r)}{M}\right)}{\ln r} ~ \text{[switch the inequality because } \ln r < 0\text{!]}
\end{eqnarray} $$

Then for all \(m > n > N\), \(|a_m - a_n| < \epsilon\) for any \(\epsilon > 0\), meaning \(\{a_n\}\) is a Cauchy sequence. \(\blacksquare\)

I wasn’t able to include a hyperlink in an equation display, but the property of geometric series I referred to above, to justify a strict inequality, can be found here, for example.
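To see the proof’s threshold in action, here’s a short Python sketch using the “worst case” sequence \(a_{n+1} = a_n + Mr^n\); the particular values of \(r\), \(M\), and \(\epsilon\) below are arbitrary:

```python
import math

# Numeric illustration of the proof: build the "worst case" sequence
# a_{n+1} = a_n + M * r**n, compute N from the formula in the proof,
# and verify |a_m - a_n| < epsilon for sampled m > n > N.
# The values of r, M, and epsilon are arbitrary.
r, M, eps = 0.9, 10.0, 1e-3
N = math.log(eps * (1 - r) / M) / math.log(r)   # threshold from the proof

# Build enough terms of the sequence (a_0 = 0 is arbitrary).
terms = [0.0]
for n in range(2000):
    terms.append(terms[-1] + M * r**n)

n0 = int(N) + 1
for n in range(n0, n0 + 50):
    for m in range(n + 1, n + 100):
        assert abs(terms[m] - terms[n]) < eps
print("Cauchy condition verified below epsilon =", eps)
```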

Posted in Math

My (possibly) favorite episode of The Simpsons: “I Love Lisa”

If I “had” to choose a favorite episode of The Simpsons, I wouldn’t, because about five would be tied as my favorites. Off the top of my head, I’d choose “I Love Lisa”, “Twenty-Two Short Films About Springfield”, “Lemon of Troy”, “Last Exit to Springfield”, and “Homer vs. the Eighteenth Amendment” (what an unfortunate title; it should have been titled “The Beer Baron”).

Anyway, here is an email I sent to Robbie and Matt of The Simpsons Show podcast, who ask their listeners to email them explaining why an episode that they’ll be reviewing soon is the listener’s all-time favorite:

Matt and Robbie,

Like any good Simpsons fan, I have a very hard time even choosing a top 5 or 10 episodes, much less a number 1. Whenever I think about my favorites and what I’d choose if I had to rank them, my mind keeps coming back to “I Love Lisa”. In your last episode you said you expected to rank “I Love Lisa” higher than it deserves, which was surprising because it is one of the very best episodes they’ve ever made. It’s a perfect half-hour of comedy. It’s also better than “Marge vs. the Monorail” in every meaningful way. First, it hits the emotional, character-driven notes very well with the Lisa and Ralph relationship, while still making those parts funny (“So, do you like…stuff?” “You can pinpoint the exact moment his heart breaks in half.”) Let’s also not forget Ralph’s amazing transformation into a skilled thespian, which was due entirely to his travails with young love and heartbreak.

Second, the episode has at least two iconic things that have entered mainstream popular culture and endured to this day, not losing any of their original hilarity: the association of “Monster Mash” with Valentine’s Day, and the “I Choo-Choo-Choose You” card that Lisa gives Ralph.

It’s also worth noting that this episode doesn’t rely on any unrealistic or fantastical devices like “Marge vs. the Monorail” does—I’m thinking specifically of Leonard Nimoy beaming away and Homer singing the Flintstones song at the beginning. This episode also isn’t a parody or an adaptation of any preexisting story that I know of, unlike “Marge vs. the Monorail” (“The Music Man”).

Finally, on a more subjective note, I just feel like “I Love Lisa” is a perfectly structured, perfectly paced, perfectly plotted episode in which everything comes together more perfectly than any I can remember—the dialog, the jokes, the story, the characters, the emotions. Like “Marge vs. the Monorail”, “I Love Lisa” doesn’t have a B plot, but I feel like there’s more to this episode. I never get to the end of this one and think, “Oh, that’s it? It’s already ending?”

It’s true that the mediocre presidents song doesn’t quite measure up to the monorail song, but “I Love Lisa” even manages to include two of the most hilarious scenes with Principal Skinner, which embody him as a character as perfectly as anything else, in an episode that isn’t even about Skinner: the Vietnam flashback on Valentine’s Day (“Johnny. Johnny! Johnyyyyyy!”), and “Welcome to a wonderful evening of theater and picking up after yourselves.”

I only compare the two episodes directly because you recently crowned “Marge vs. the Monorail” your new #1 and because they aired so close to each other—they’re on the same disc in the season 4 DVD set. I’ve watched that whole disc recently because your glowing reviews have finally inspired me to give into nostalgia and watch along with you, but I couldn’t help watching ahead a few episodes. So please don’t feel guilty about loving this episode as much as you do, because I probably love it more and will probably be unable to resist watching it again after I hear your review of it. And again next Valentine’s Day, just like last Valentine’s Day and the one before that….

Keep up the great work,

John Petrie

I became internet-famous when Robbie read it on the air (go to 34:12).

(And I only plan on sending Matt and Robbie one email about a supposed favorite, so I guess for the purposes of their podcast and my public record thereon, I’ve committed to “I Love Lisa”. It really, truly is a perfect half-hour of comedy.)

Posted in Entertainment, Interwebs, TV | 1 Comment

More Simpsons trivia team names

I occasionally peruse the team names at the Woo Hoo! Classic Simpsons Trivia page because…what better way to spend my free time during my work breaks and such? Here are my recent favorites:

Uh, Dan, sir, people are becoming a bit…. confused by the way you and your co host are well, constantly holding hands

Because Woo Hoo Classic Trivia Brooklyn couldn’t exist without six white stripes, seven red stripes, and a hell of a lot of Dans!

We’re here, we’re queer, we dont want any more dans

Our theory is: Simon likes dog food!

Dan’s Moms say they’re cool. [On the night when both Dans’ parents attended]

Christmas Ape Goes to Trivia Night

Are “poo” and “ass” taken?

The Non-Giving-Up Trivia Guys

Jeremy’s I. Ron Butterfly

You Have 30 Minutes to Name Your Team. You Have 10 Minutes to Name Your Team. Your Team Has Been Impounded. Your Team Has Been Crushed Into a Cube.

A Little Team Called “Love Is” – They Are Two Naked 8 Year Olds Who Are Married

You know those trivia nights where the two Dans with annoying voices yammer back and forth? We invented those!

Dan = White, Dan = White

The story of how two Dans and five other men parlayed a small business loan into a thriving trivia concern is a long and interesting one. And here it is.

Welcome to an Evening of Trivia and Picking up After Yourselves

Harry Shearer’s Non-Union Mexican Equivalents

The Only Monster Here is the Trivia Monster Who Has Enslaved This Bar. I Call Him Trivior! And It’s Time To Snatch This Bar From His Neon Claws!

To find the Dans, I just have to think like the Dans. I’m a big trivia host wannabe and i make the same stupid jokes every month … Berry Park!

♫ I hate every Dan I see, from Dan Mulhall to Dan Ozzi. No, you’ll never make a Daniel out of me… ♫

Posted in Humor, TV | Comments Off on More Simpsons trivia team names

Proof that if f and g are continuous functions, then f/g is also continuous (as long as g(x) ≠ 0)

In almost any calculus or analysis textbook, in the chapter on continuity of functions, you’ll encounter four theorems about the operations on functions that preserve continuity: multiplying a continuous function by a scalar (real number), adding two continuous functions, multiplying two continuous functions, and dividing two continuous functions. But no textbook that I have seen, nor any website, lecture notes, study guide, homework solution, Stack Exchange question, or anything else I’ve found online, has the proof of the last one! On the chance that I can add something new to the internet for the first time, here is the proof that Professor Yuri Ledyaev did in my Advanced Calculus (introductory real analysis) class at Western Michigan University:

Let \(f, g\) be two continuous functions with domain \(A \subset \mathbb{R}\), let \(a \in A\), and suppose \(g(a) \neq 0\). Then \(f/g\) is also continuous at the point \(a\). The proof is nearly identical if \(A \subset \mathbb{R}^n\) —which is in fact the way we did the proof, in the first chapter on functions of multiple variables—but there’s no way I’m typing every single \(x\) and \(a\) in vector form in LaTeX. Just imagine they’re all bold or they have a little line over them, and imagine that the conditions \( \left|~x-a~\right| < \delta \) are all \( \vec{x} \in B_\delta(\vec{a}) \).

Proof: For the function \(f/g\) to be continuous at \(a\), we need
$$ \lim_{x \to a} \frac{f(x)}{g(x)} = \frac{f(a)}{g(a)}. $$

The way to prove this equality is to apply the Cauchy definition of continuity, i.e., \(\forall \epsilon > 0\), \(\exists \delta > 0\) such that if \(\left|~x - a~\right| < \delta\), then \(\left|~\frac{f(x)}{g(x)} - \frac{f(a)}{g(a)}~\right| < \epsilon\). To obtain this inequality, we start with the latter absolute-value expression, get a common denominator, use the ol' add-and-subtract-the-same-thing trick, and apply the fact that both \(f\) and \(g\) are individually continuous. Observe: $$ \begin{eqnarray} \left|~\frac{f(x)}{g(x)} - \frac{f(a)}{g(a)}~\right| &=& \frac{\left|~f(x)g(a) - f(a)g(x)~\right|}{\left|~g(x)~\right| \left|~g(a)~\right|} \nonumber \\[13pt] &=& \frac{\left|~f(x)g(a) - f(a)g(a) + f(a)g(a) - f(a)g(x)~\right|}{\left|~g(x)~\right| \left|~g(a)~\right|} \nonumber \\[13pt] &\leq& \frac{\left|~f(x)g(a) - f(a)g(a)~\right| + \left|~f(a)g(a) - f(a)g(x)~\right|}{\left|~g(x)~\right| \left|~g(a)~\right|} \nonumber \\[13pt] &=& \frac{\left|~g(a)~\right| \left|~f(x) - f(a)~\right| + \left|~f(a)~\right| \left|~g(x) - g(a)~\right|}{\left|~g(x)~\right| \left|~g(a)~\right|} \nonumber \end{eqnarray} $$ (The \(\leq\) comes from the Triangle Inequality.) Now, we have to interrupt our equation display to make use of the fact that \(g\) is continuous. Since \(g\) is continuous, we can let \(\epsilon_{\small{1}} = \frac{1}{2} \left|~g(a)~\right| > 0\). Then there exists \(\delta_{\small{1}}\) such that whenever \(\left|~x-a~\right|<\delta_{\small{1}}\), $$ \begin{eqnarray} \left|~g(a)~\right| - \left|~g(x)~\right| &\leq& \left|~g(x) - g(a)~\right| < \epsilon_{\small{1}} \nonumber \\[13pt] \left|~g(a)~\right| - \epsilon_{\small{1}} &<& \left|~g(x)~\right| \nonumber \\[13pt] \frac{1}{2} \left|~g(a)~\right| &<& \left|~g(x)~\right| \nonumber \\[13pt] \frac{2}{\left|~g(a)~\right|} &>& \frac{1}{\left|~g(x)~\right|} \nonumber \end{eqnarray} $$
(Again the \(\leq\) in this equation display is due to (one form of) the Triangle Inequality.)

Applying this inequality to the \(\left|~g(x)~\right| \) in the denominator up above gives us
$$ \begin{eqnarray} \frac{\left|~g(a)~\right| \left|~f(x) - f(a)~\right| + \left|~f(a)~\right| \left|~g(x) - g(a)~\right|}{\left|~g(x)~\right| \left|~g(a)~\right|} &<& \frac{2}{\left|~g(a)~\right|^2} \cdot \Big[ \left|~g(a)~\right| \left|~f(x) - f(a)~\right| + \left|~f(a)~\right| \left|~g(x) - g(a)~\right| \Big] \nonumber \\[13pt] &=& \frac{2}{\left|~g(a)~\right|} \left|~f(x) - f(a)~\right| + \frac{2\left|~f(a)~\right|}{\left|~g(a)~\right|^2} \left|~g(x) - g(a)~\right| \nonumber \end{eqnarray} $$

(I apologize for the funky appearance of that last equation display; I’m not good enough with Latex…or WordPress or anything else, for that matter…to make those long expressions all fit into the allowable space; the right side always went over into the sidebar.)

Now, since \(f\) is continuous, we know that for any \(\frac{\epsilon}{2}\), there exists \(\delta_2\) such that making \(\left|~x - a~\right| < \delta_2\) will make \(\left|~f(x) - f(a)~\right| < \frac{\epsilon}{2}\cdot\frac{\left|~g(a)~\right|}{2}\).

Similarly, since \(g\) is continuous, there exists \(\delta_3\) such that making \(\left|~x - a~\right| < \delta_3\) will make \(\left|~g(x) - g(a)~\right| < \frac{\epsilon}{2}\cdot\frac{\left|~g(a)~\right|^2}{2\left|~f(a)~\right| + 1}\). (The \(+1\) must be added in case \(f(a) = 0\).)

Finally, we simply choose \(\delta = \min{\{\delta_1, \delta_2, \delta_3\}}\), and this will give us

$$ \begin{eqnarray} \left|~\frac{f(x)}{g(x)} - \frac{f(a)}{g(a)}~\right| &<& \frac{2}{\left|~g(a)~\right|} \left|~f(x) - f(a)~\right| + \frac{2\left|~f(a)~\right|}{\left|~g(a)~\right|^2} \left|~g(x) - g(a)~\right| \nonumber\\[13pt] &<& \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon \end{eqnarray} $$ $$\tag*{$\blacksquare$}$$ To summarize, this proof showed that if \(f\) and \(g\) are continuous, then for any \(\epsilon > 0\), letting \(\left|~x-a~\right| <\delta\) will make \(\left|~\frac{f(x)}{g(x)} - \frac{f(a)}{g(a)}~\right| < \epsilon\), meaning \(\frac{f(x)}{g(x)}\) is continuous at \(x=a\). (This post has been edited to correct a mathematical error found by L. Anderson; see comments.)
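As a sanity check on the key bound (this is my own addition, not part of the proof), here is a quick Python spot-check with the hypothetical choices \(f = \sin\), \(g = \cos\), and \(a = 0.3\):

```python
import math

# Spot-check: whenever |g(x)| > |g(a)|/2, the quotient difference is bounded by
# (2/|g(a)|)|f(x)-f(a)| + (2|f(a)|/|g(a)|^2)|g(x)-g(a)|.
f, g = math.sin, math.cos
a = 0.3

for dx in (0.1, 0.01, 0.001):
    x = a + dx
    assert abs(g(x)) > abs(g(a)) / 2          # the delta_1 condition holds here
    lhs = abs(f(x) / g(x) - f(a) / g(a))
    rhs = (2 / abs(g(a))) * abs(f(x) - f(a)) \
        + (2 * abs(f(a)) / g(a)**2) * abs(g(x) - g(a))
    assert lhs < rhs
```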

Posted in Math, Theorems | 3 Comments

Convergence of a difficult integral using the limit comparison test

Here’s a great problem from an exam in my second-semester Advanced Calculus (introductory real analysis) course taught by Yuri Ledyaev at Western Michigan University:

Find the values of \(p\) for which the integral converges:
$$ \int_{1}^{\infty} \frac{\left(\tan\frac{1}{x}\right)^p}{x+x^2} \, dx $$

To determine what test to use, it is best to recall that
$$ \tan\theta \sim \theta \quad \text{as} \quad \theta \to 0, $$

that is,

$$ \lim_{\theta \to 0} \frac{\tan\theta}{\theta} = 1, $$

which is a fancy way of saying that at very small values of \(\theta\), \(\tan\theta\) behaves like \(\theta\). (In case you forget this, it is easy to recall by remembering that \(\tan\theta = \frac{\sin\theta}{\cos\theta}\), whose denominator approaches \(1\) as \(\theta\) approaches \(0\), so \(\tan\theta\) behaves like \(\sin\theta\) for very small values of \(\theta\). All semi-advanced math students should remember the small-angle approximation rule, i.e., for small values of \(\theta\), \(\sin\theta \approx \theta\).)

We can substitute \(1/x\) for \(\theta\) to see that
$$ \tan\frac{1}{x} \sim \frac{1}{x} \quad \text{as} \quad x \to \infty. $$

The reason we’re interested in this is that we need to know what \(\frac{\left(\tan\frac{1}{x}\right)^p}{x+x^2}\) behaves like or looks like as \(x \rightarrow \infty\). Whatever our integrand looks like is what we’ll compare it to in the limit comparison test. So:

$$ \frac{\left(\tan\frac{1}{x}\right)^p}{x+x^2} \sim \frac{\left(\frac{1}{x}\right)^p}{x+x^2} = \frac{1}{x^{p+1} + x^{p+2}} \quad \text{as} \quad x \to \infty. $$

In this case, it is a good bet to choose the term in the denominator with the greater exponent rather than the term with the lesser exponent for use in the limit comparison test, so we’ll choose \(x^{p+2}\). That is, for the limit comparison test, let \(f(x) = \frac{\left(\tan\frac{1}{x}\right)^p}{x+x^2}\) and \(g(x) = \frac{1}{x^{p+2}}\).

The limit comparison test for integrals says that if \(f\) and \(g\) are both defined and positive on \([a, \infty)\) and integrable on \([a, b]\) for all \(b \geq a\), and if \(\lim_{x \to \infty} \frac{f(x)}{g(x)}\) exists and is not equal to \(0\), then the integrals \(\int_{a}^{\infty} f(x) dx\) and \(\int_{a}^{\infty} g(x) dx\) are equiconvergent.


$$ \begin{eqnarray} \lim_{x \to \infty} \frac{f(x)}{g(x)} &=& \lim_{x \to \infty} \frac{\frac{\left(\tan\frac{1}{x}\right)^p}{x+x^2}}{\frac{1}{x^{p+2}}} \nonumber \\[3pt]
&=& \lim_{x \to \infty} \frac{\left(\tan\frac{1}{x}\right)^p}{x+x^2} \cdot x^{p+2} \nonumber \\[3pt]
&\sim& \lim_{x \to \infty} \frac{\frac{1}{x^p}}{x+x^2} \cdot x^{p+2} \nonumber \\[3pt]
&=& \lim_{x \to \infty} \frac{x^2}{x+x^2} \nonumber \\[3pt]
&=& 1 \nonumber \end{eqnarray} $$

Thus, our choices of \(f(x)\) and \(g(x)\) satisfy the limit comparison test, meaning \(\int_{a}^{\infty} f(x) \, dx\) converges if and only if \(\int_{a}^{\infty} g(x) \, dx\) converges.

When does \(\int_{a}^{\infty} g(x) dx\) converge? When \(p+2 > 1 \) (by the p-series rule). Thus, both \(\int_{a}^{\infty} f(x) dx\) and \(\int_{a}^{\infty} g(x) dx\) converge when \(p>-1\) and diverge when \(p \leq -1\).
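To see the cutoff at \(p = -1\) numerically (my own addition; a crude midpoint Riemann sum, not a rigorous check), compare partial integrals for \(p = 0\), which should settle near \(\ln 2\) (the exact value of \(\int_1^\infty \frac{dx}{x+x^2}\)), with those for \(p = -1\), which should keep growing like a logarithm:

```python
import math

def integrand(x, p):
    return math.tan(1 / x)**p / (x + x * x)

def partial_integral(p, b, steps=100000):
    # crude midpoint Riemann sum for the integral from 1 to b
    h = (b - 1) / steps
    return sum(integrand(1 + (k + 0.5) * h, p) * h for k in range(steps))

# p = 0 > -1: partial integrals settle near ln 2 ~ 0.693
conv = [partial_integral(0, b) for b in (10, 100, 1000)]
# p = -1: partial integrals grow without bound (log-like divergence)
div = [partial_integral(-1, b) for b in (10, 100, 1000)]

assert abs(conv[-1] - math.log(2)) < 0.01   # converging
assert div[-1] - div[-2] > 1.0              # still climbing
```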

Posted in Math | Comments Off on Convergence of a difficult integral using the limit comparison test

My favorite Simpsons trivia team names

I’ve wasted several hours this week reading through the names of teams at the classic Simpsons trivia nights that are held in certain restaurants in Chicago, Vancouver, Brooklyn, Toronto, and Hamilton, Ontario. Many of them are hilariously clever. Naturally, I started thinking up some of my own that I’d like to use, even though there is approximately no chance I will ever be able to attend one of these trivia nights.

Below are two lists of names: the ones I’ve thought of and the ones I’ve seen at the website above. My first one listed is the only one I’ve noticed among the actually used team names (after I thought of it! great minds think alike!). I often refer to Neil Arsenty or Chicago’s Pizzeria Serio in my hypothetical names because I live a few hours from Chicago and can at least dream of visiting Chicago sometime and attending their trivia night. Also I love the Simpsons Mixtape podcast, whose hosts regularly attend the one in Chicago, and the Worst Episode Ever podcast, whose hosts regularly attend and host the one in Brooklyn, respectively.

My hypothetical team names:

Oh, Neil. I’d be lying if I said my team wasn’t committing crimes [Alternative: Oh, Neil. I’d be lying if I said my team wasn’t cheating.]

We told you: we’re not Xena!

Mister Moe

Or, in honor of its discoverer, the Teamahedron, hm-hey, hm-hey

No. No. No. No. No. No. No. No. No. No. No. Yes—I mean no, no.

Tie good. You like trivia?

Ooh! Look at me! I’m making people happy! I’m the trivia man, from Trivia Land, in a gumdrop house on Lollipop Laaaaaane!

Losin’ at trivia? Oh, you better believe that’s a paddlin’

Whaddya meeeeeean, the pizzeria’s out of money?

In America, forst you get the trivia, then you get the donuts, then you get the women.

There’s William Henry Harrison, “We lost by 30 points!”

Why must you turn this pizzeria into a house of LIES?

Tell you what: we drive all the way to Chicago and we lose miserably, I owe you a Coke.

“Okay, Mr. Burns, uh, what’s your team’s name?” “I don’t know…”

I can’t believe it’s a trivia team

You know, I’ve had a lot of jobs: boxer, mascot, astronaut, imitation Krusty, baby proofer, trucker, hippie, plow driver, food critic, conceptual artist, grease salesman, carny, mayor, drifter, bodyguard for the mayor, country-western manager, garbage commissioner, mountain climber, farmer, inventor, Smithers, Poochie, celebrity assistant, power plant worker, fortune cookie writer, beer baron, Kwik-E-Mart clerk, homophobe, and missionary, but hosting Simpsons trivia, that gives me the best feeling of all.

We’re going out for trivia! If we don’t come back, avenge our deaths!

“We got first prize!” “You won first place at trivia?” “No, but we got it…….. Stealing is wrong.”

Ow, my eye! I’m not supposed to get trivia in it!

We watched all the classic Simpsons episodes really closely, so when we came to Classic Simpsons Trivia, the answers were stuck in our heads. It was like a whole different kind of cheating!

More testicles mean more iron

There’s very little meat in these trivia cards

I have had it with these trivia nights, Neil! The low score totals, team after team of ugly, ugly people.

If you want to play Simpsons trivia, and I mean really play it, you want the Carnivale

D’oh! A deer! A female deer!

No one would PRETEND to be a last-place Simpsons trivia team

“Johnny Tightlips, do you know the answer?” “Eh, I know a lot of things.”

Neil, this circle is you.

Mona Stevens, Penelope Olsen, Martha Stewart, and Muddie Mae Suggins

My favorites from the official trivia organization’s web page above:

Tiffany, Heather, Cody, Dylan, Dermott, Jacob, Taylor, Brittany, Wesley, Rumer, Scout, Cassidy, Zoe, Chloe, Max, Hunter, Kendall, Caitlin, Noah, Sasha, Morgan, Kyra, Ian, Lauren, Qbert, Phil, Neil

Trivia at This Time of Year? At This Time of Day? In This Part of the Country? Localized Entirely Within This Pizzeria?

Mr. Arsenty? We All Have Nosebleeds

More Winnin’, Les Winen

Our Team May Be Ugly and Hate-filled, But Wait, What’s the Third Thing You Said?

Our Team No Function Beer Well Without

Our team’s low score is the result of an unrelated alcohol problem

They Slept, They Stole, They Were Rude To The Other Players. But Still, There Goes The Best Damn Team A Trivia Night Ever Saw


This Team Engaged in Intercourse with Your Spouse or Significant Other. Now THAT’S Trivia!

Our Team Name is Agnes. It Means Lamb! Lamb of God!

Remember Our Team Name? We’re Back In Pog Form!

The bottom rung of society now that that cold snap killed all those hobos

I for one would like to see the trivia questions in advance, I dont like the idea of the same team winning two months in a row

Looks like Rusty’s team got a discipline problem. Maybe that’s why we beat them at Simpsons Trivia nearly half the time…

This teams got a hot date… a date… dinner with friends… dinner alone… watching tv alone… ok ok, we’re gonna go to berry park general knowledge trivia. Buzz.. Simpsons trivia.. Ding. We don’t deserve this kinda shabby treatment

“You’re always trying to give me long trivia names. What is it with you?” “I just think they’re neat.”

Of course we could make the questions more challenging, but then the stupider teams will be in here furrowing their brow in a vain attempt to understand the situation

Your older, balder, fatter team

Das Trivia Team Ist Ein Nuisance Team!

We Wouldn’t Have Thought We Could Put a Price on Neil Arsenty’s Life, But Here We Are

Especially Lisa! But ESPECIALLY Bart

This one team seems to love the speedo man!

Chris, When You Participate in Simpsons Trivia, It’s Not Whether You Win or Lose, It’s How Drunk You Get

Family. Religion. Friendship. These are the 3 Demons You Must Slay if You Wish To Succeed In Trivia

Can I Borrow a Team Name?

Remember When We Went to Simpsons Trivia and We Forgot How to Drive? [I like this one a lot because I can imagine the whole bar saying, “That’s because you were drunk!” and the team responding, “And how!”]

Can I Have the Keys to the Car, Lover? I Want to Change Teams


The Seat Moisteners from Sector 7G

Evergreen Terrorist

Trivia Involves Being a Bit Underhanded, a Bit Devious, a Bit—as the French Say—Bartesque

We’re a Family Team. A Happy Family. Maybe Single People Play Trivia. We Don’t Know. Frankly, We Don’t Want To Know. That’s One Market We Can Do Without

I’m Sorry If You Heard Disneyland, But I Distinctly Said Simpsons Trivia Night

Too Crazy for Trivia Town, Too Much Trivia for Crazy Town!

Forwards, Not Backwards! Upwards, Not Forwards! And Always Twirling, Twirling, Twirling Towards First Place!

Excuse me, our team is also named Bort

The Bort Identity

Stupid team name. Be more funny!

The Team From Kua…Kual Lam…France!

It was the best of teams, it was the blurst of teams?!

Go Ahead, First Place Team… Enjoy Your Donuts. Little Do You Know You’re Getting Closer to the Poison Donut!

Team ‘You Know Who’ Playing The Secret ‘Wink Wink’ At The ‘You Know What’

You Don’t Win Trivia With Salad

The Following Answers Are Lies, But They’re Entertaining Lies, And Isn’t That The Real Truth? The Answer Is No.

Which Two of these Popular Trivia Team Members Died in the Last Year? If You Guessed Kelly and Brian, You’d Be Wrong. They Were Never Popular

And I Come Before You Good People Tonight with a team name. Probably the greatest… oh it’s not for you, It’s more of a Shelbyville Team Name

Don’t Make Me Run, I’m Full of Pizza

Die Team Die

Ah Yes. Shake it, Dan. Capital Knockers

They Said They Made the Team Themselves… from a Bigger Team

Stupid Teams Need The Most Attention

This Team Must Be Good. They Don’t Need A Lot of Players, Or Even Correct Spelling

Union Rule 26: This Team Must Win Trivia at Least Once Regardless of Gross Incompetence, Obesity or Rank Odor

Doesn’t This Team Know Any Songs That Aren’t Commericals?

“Oh, Simpsons Trivia, That’s Cool” “Are You Being Sarcastic, Dude?” “I Don’t Even Know Anymore”

You Want Us To Show This Question To The Cat, And Have The Cat Tell You What It Is? ’Cuz The Cat’s Going To Get It!

The Greatest Team Ever Hula’ed

A Shiny New Donkey For The Team That Brings Us The Head of Colonel Montoya

There Are Too Many Teams Nowadays. Please Eliminate Three.

Why Would They Come To Simpsons Trivia Just To Boo Us?

Only Who Can Win at Trivia? You Have Selected “You”, Referring to Our Team. The Correct Answer is “You”

Persephone? People Don’t Want Trivia Teams Named After Hungry Old Greek Broads

The Extra B is For BYOBB. What’s The Second B For? Best Team Ever!

On This Team, We Obey The Laws Of Thermodynamics!

The Team That Was Eventually Rescued By…Oh, Let’s Say Moe

Our Team is Hatless, Repeat, Hatless

I hate every Dan I see, from Dan Mulhall to Dan Ozzi, no you’ll never make a Daniel out of me!!

Posted in Humor, TV | 1 Comment

Fascinating result of the Intermediate Value Theorem

This is problem #1 from chapter 3.9 in Advanced Calculus: Theory and Practice, my introductory real analysis textbook at Western Michigan University:

Suppose that \(f\) is continuous on \(\left[0, 2\right]\) and \(f(0) = f(2)\). Prove that there exist \(x_1\), \(x_2 \in \left[0, 2\right]\) such that \(x_2 - x_1 = 1\) and \(f(x_1) = f(x_2)\).

Informally, this says there are two \(x\)-values exactly \(1\) unit apart whose \(f\) values are equal. This result isn’t all that obvious, and I liked it so much because it’s a great example of the type of abstract, theoretical result you learn to prove in mathematical analysis.

Recall that the Intermediate Value Theorem states that if \(f\) is a continuous function on an interval \(\left[a, b\right]\) and \(f(a) \neq f(b)\), then for every \(C\) between \(f(a)\) and \(f(b)\), there exists \(c \in (a, b)\) such that \(f(c) = C\). Often the Intermediate Value Theorem is stated as a specific case, where \( f(a) < 0 \) and \( f(b) > 0 \), in which case there exists \( c \in (a, b) \) such that \(f(c) = 0\). This is the case that will be relevant here. Now the solution to the problem:

Proof: Let \(g(x) = f(x+1) - f(x)\), defined on \(\left[0, 1\right]\). The function \(g\) is continuous, and $$g(0) = f(1) - f(0)$$ and $$g(1) = f(2) - f(1) = f(0) - f(1) = -g(0).$$

If \(g(0) = 0\), then \(f(0+1) - f(0) = 0\), so \(f(1) = f(0)\) and the solution is to take \(x_1 = 0\) and \(x_2 = 1\). If \(g(0) \neq 0\), then \(g(1)\) and \(g(0)\) are nonzero numbers of equal magnitude but opposite sign. By the Intermediate Value Theorem, there exists \(c \in (0, 1)\) such that \(g(c) = 0\). Now the solution is to define \(x_1 = c\) and \(x_2 = c+1\). This makes \(g(c) = f(c+1) - f(c) = f(x_2) - f(x_1) = 0\), so \(f(x_2) = f(x_1)\). \(\blacksquare\)
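The construction in the proof can actually be carried out numerically. Here's a Python sketch (my own addition) using bisection and the hypothetical choice \(f(x) = (x-1)^2\), which satisfies \(f(0) = f(2)\):

```python
def f(x):
    return (x - 1)**2     # continuous on [0, 2] with f(0) == f(2) == 1

def g(x):
    return f(x + 1) - f(x)

# g(0) = -1 and g(1) = 1 have opposite signs, so bisect g on [0, 1]
lo, hi = 0.0, 1.0
assert g(lo) * g(hi) < 0
for _ in range(60):
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid

x1 = (lo + hi) / 2        # the c from the proof; for this f, c = 0.5
x2 = x1 + 1
assert abs(x2 - x1 - 1) < 1e-12
assert abs(f(x2) - f(x1)) < 1e-9
```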

Posted in Math | Comments Off on Fascinating result of the Intermediate Value Theorem

Proofs of some trigonometric identities

Remember all those trigonometric identities inside the front cover of your calculus book that were too hard to memorize and you didn’t have to anyway? Not the simple ones like \(\sin^2 x + \cos^2 x = 1\) or \(\tan^2 x + 1 = \sec^2 x\). I mean the angle-addition and -subtraction formulas and the like. We proved them pretty easily in my Advanced Calculus class at Western Michigan University, starting with some assumed knowledge of vector calculus. I don’t know what other ways there are to prove all of them, but this way starts with \(\cos (\alpha - \beta) \) and derives all of them from there.

\( \boldsymbol{ 1. \cos(\alpha - \beta) = \cos \alpha \cos \beta + \sin \alpha \sin \beta } \)

First, imagine two unit vectors with their tails at the origin and their heads on the unit circle. Vector \(\vec{a}\) makes an angle of \(\alpha\) with the horizontal axis and vector \(\vec{b}\) makes an angle of \(\beta\). Thus, the angle between them is \(\alpha - \beta\). And each vector written in component form is \(\vec{a} = \langle \cos \alpha, \sin \alpha \rangle \) and \(\vec{b} = \langle \cos \beta, \sin \beta \rangle \). Recall that their dot product is
$$ \begin{eqnarray} \langle \cos \alpha, \sin \alpha \rangle \cdot \langle \cos \beta, \sin \beta \rangle &=& \| \vec{a} \| \| \vec{b} \| \cos(\alpha - \beta) \nonumber \\
\cos \alpha \cos \beta + \sin \alpha \sin \beta &=& 1 \cdot 1 \cdot \cos(\alpha - \beta) \nonumber \end{eqnarray} $$

\( \boldsymbol{ 2. \cos (\alpha + \beta) = \cos \alpha \cos \beta - \sin \alpha \sin \beta } \)

The next one is easy because we can just replace \(\beta\) with \(-\beta\):
$$ \begin{eqnarray} \cos(\alpha + \beta) &=& \cos(\alpha - (-\beta)) \nonumber \\
&=& \cos \alpha \cos (-\beta) + \sin \alpha \sin(-\beta) \nonumber \\
&=& \cos \alpha \cos \beta + \sin \alpha (-\sin \beta) \nonumber \\
&=& \cos \alpha \cos \beta - \sin \alpha \sin \beta \nonumber \end{eqnarray} $$

That one relies on the fact that \(\cos (-\alpha) = \cos \alpha\) and \(\sin (-\alpha) = -\sin \alpha\), so…I guess you have to know that. The way we “proved” them is to simply draw an angle into the fourth quadrant that was the same magnitude as \(+\alpha\) and observe that the cosine (horizontal distance) was the same and the sine (vertical distance) was the same in magnitude but opposite in sign. I don’t know what other, more rigorous ways there are to prove that \( \cos(-\alpha) = \cos \alpha\) and \(\sin(-\alpha) = -\sin \alpha\), but there are only so many things you can spend time proving in a semester of Analysis.

Before we do the equivalent \(\sin\) identities, it is easiest to do the following two:

\( \boldsymbol{ 3. \cos \left(\frac{\pi}{2} - \alpha \right) = \sin \alpha } \)

$$ \begin{eqnarray} \cos \left(\frac{\pi}{2} - \alpha \right) &=& \cos \frac{\pi}{2} \cos \alpha + \sin \frac{\pi}{2} \sin \alpha \nonumber \\
&=& 0 \cdot \cos \alpha + 1 \cdot \sin \alpha \nonumber \\
&=& \sin \alpha \nonumber \end{eqnarray} $$

\( \boldsymbol{ 4. \sin \left(\frac{\pi}{2} - \alpha \right) = \cos \alpha} \)

From #3, since the sine of an angle equals the cosine of \(\frac{\pi}{2} \) minus that angle, we can easily transform \(\sin \left(\frac{\pi}{2} - \alpha \right)\):
$$ \begin{eqnarray} \sin \left(\frac{\pi}{2} - \alpha \right) &=& \cos \left(\frac{\pi}{2} - \left(\frac{\pi}{2} - \alpha \right) \right) \nonumber \\
&=& \cos \left(\frac{\pi}{2} - \frac{\pi}{2} + \alpha \right) \nonumber \\
&=& \cos \alpha \nonumber \end{eqnarray} $$

Now we can do the \(\sin\) angle-addition and angle-subtraction identities:

\( \boldsymbol{ 5. \sin(\alpha + \beta) = \sin \alpha \cos \beta + \sin \beta \cos \alpha} \)

$$ \begin{eqnarray} \sin(\alpha + \beta) &=& \cos \left(\frac{\pi}{2} - (\alpha + \beta) \right) \nonumber \\
&=& \cos \left( \left(\frac{\pi}{2} - \alpha \right) - \beta \right) \nonumber \\
&=& \cos \left(\frac{\pi}{2} - \alpha \right) \cos \beta + \sin \left(\frac{\pi}{2} - \alpha \right) \sin \beta \nonumber \\
&=& \sin \alpha \cos \beta + \cos \alpha \sin \beta \nonumber \end{eqnarray} $$

\( \boldsymbol{ 6. \sin(\alpha - \beta) = \sin \alpha \cos \beta - \sin \beta \cos \alpha} \)

$$ \begin{eqnarray} \sin(\alpha - \beta) &=& \cos \left(\frac{\pi}{2} - (\alpha - \beta) \right) \nonumber \\
&=& \cos \left( \left(\frac{\pi}{2} - \alpha \right) + \beta \right) \nonumber \\
&=& \cos \left(\frac{\pi}{2} - \alpha \right) \cos \beta - \sin \left(\frac{\pi}{2} - \alpha \right) \sin \beta \nonumber \\
&=& \sin \alpha \cos \beta - \cos \alpha \sin \beta \nonumber \end{eqnarray} $$

And now we can do the double-angle identities:

\( \boldsymbol{ 7. \cos 2\alpha = \cos^2 \alpha - \sin^2 \alpha} \)
$$ \begin{eqnarray} \cos 2 \alpha &=& \cos(\alpha + \alpha) \nonumber \\
&=& \cos \alpha \cos \alpha - \sin \alpha \sin \alpha \nonumber \\
&=& \cos^2 \alpha - \sin^2 \alpha \nonumber \end{eqnarray} $$

\( \boldsymbol{ 8. \sin 2\alpha = 2\sin \alpha \cos \alpha} \)
$$ \begin{eqnarray} \sin 2\alpha &=& \sin(\alpha + \alpha) \nonumber \\
&=& \sin \alpha \cos \alpha + \sin \alpha \cos \alpha \nonumber \\
&=& 2\sin \alpha \cos \alpha \nonumber \end{eqnarray} $$

\( \boldsymbol{ 9. \cos \alpha - \cos \beta = -2 \sin\left(\frac{\alpha + \beta}{2}\right) \sin\left(\frac{\alpha - \beta}{2}\right)} \)
$$ \begin{eqnarray} \cos \alpha - \cos \beta &=& \cos \left(\frac{\alpha + \beta}{2} + \frac{\alpha - \beta}{2}\right) - \cos \left(\frac{\alpha + \beta}{2} - \frac{\alpha - \beta}{2}\right) \nonumber \\
&=& \cos\left(\frac{\alpha + \beta}{2}\right) \cos \left(\frac{\alpha - \beta}{2}\right) - \sin \left(\frac{\alpha + \beta}{2}\right) \sin \left(\frac{\alpha - \beta}{2}\right) \nonumber \\
&& -~ \left[\cos \left(\frac{\alpha + \beta}{2}\right) \cos \left(\frac{\alpha - \beta}{2}\right) + \sin \left(\frac{\alpha + \beta}{2}\right) \sin \left(\frac{\alpha - \beta}{2}\right) \right] \nonumber \\
&=& -2 \sin\left(\frac{\alpha + \beta}{2}\right) \sin\left(\frac{\alpha - \beta}{2}\right) \nonumber \end{eqnarray} $$

\( \boldsymbol{ 10. \cos \alpha + \cos \beta = 2 \cos\left(\frac{\alpha + \beta}{2}\right) \cos\left(\frac{\alpha - \beta}{2}\right)} \)
$$ \begin{eqnarray} \cos \alpha + \cos \beta &=& \cos \left(\frac{\alpha + \beta}{2} + \frac{\alpha - \beta}{2}\right) + \cos \left(\frac{\alpha + \beta}{2} - \frac{\alpha - \beta}{2}\right) \nonumber \\
&=& \cos\left(\frac{\alpha + \beta}{2}\right) \cos \left(\frac{\alpha - \beta}{2}\right) - \sin \left(\frac{\alpha + \beta}{2}\right) \sin \left(\frac{\alpha - \beta}{2}\right) \nonumber \\
&& +~ \cos \left(\frac{\alpha + \beta}{2}\right) \cos \left(\frac{\alpha - \beta}{2}\right) + \sin \left(\frac{\alpha + \beta}{2}\right) \sin \left(\frac{\alpha - \beta}{2}\right) \nonumber \\
&=& 2 \cos\left(\frac{\alpha + \beta}{2}\right) \cos\left(\frac{\alpha - \beta}{2}\right) \nonumber \end{eqnarray} $$

\( \boldsymbol{ 11. \sin \alpha - \sin \beta = 2 \sin \left(\frac{\alpha - \beta}{2}\right) \cos \left(\frac{\alpha + \beta}{2}\right)} \)
$$ \begin{eqnarray} \sin \alpha - \sin \beta &=& \sin \left(\frac{\alpha + \beta}{2} + \frac{\alpha - \beta}{2}\right) - \sin \left(\frac{\alpha + \beta}{2} - \frac{\alpha - \beta}{2}\right) \nonumber \\
&=& \sin \left(\frac{\alpha + \beta}{2}\right) \cos \left(\frac{\alpha - \beta}{2}\right) + \sin \left(\frac{\alpha - \beta}{2} \right) \cos \left(\frac{\alpha + \beta}{2}\right) \nonumber \\
&& -~ \left[ \sin \left(\frac{\alpha + \beta}{2}\right) \cos \left(\frac{\alpha - \beta}{2}\right) - \sin \left(\frac{\alpha - \beta}{2}\right) \cos \left(\frac{\alpha + \beta}{2}\right) \right] \nonumber \\
&=& 2 \sin \left(\frac{\alpha - \beta}{2}\right) \cos \left(\frac{\alpha + \beta}{2}\right) \nonumber \end{eqnarray} $$

\( \boldsymbol{ 12. \sin \alpha + \sin \beta = 2 \sin \left(\frac{\alpha + \beta}{2}\right) \cos \left(\frac{\alpha - \beta}{2}\right)} \)
$$\begin{align}
\sin \alpha + \sin \beta = & ~ \sin \left(\frac{\alpha + \beta}{2} + \frac{\alpha - \beta}{2}\right) + \sin \left(\frac{\alpha + \beta}{2} - \frac{\alpha - \beta}{2}\right) \nonumber \\
= & ~ \sin \left(\frac{\alpha + \beta}{2}\right) \cos \left(\frac{\alpha - \beta}{2}\right) + \sin \left(\frac{\alpha - \beta}{2} \right) \cos \left(\frac{\alpha + \beta}{2}\right) + \nonumber \\
& ~ \sin \left(\frac{\alpha + \beta}{2}\right) \cos \left(\frac{\alpha - \beta}{2}\right) - \sin \left(\frac{\alpha - \beta}{2}\right) \cos \left(\frac{\alpha + \beta}{2}\right) \nonumber \\
= & ~ 2 \sin \left(\frac{\alpha + \beta}{2}\right) \cos \left(\frac{\alpha - \beta}{2}\right) \nonumber
\end{align}$$
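These sum-to-product identities are easy to spot-check numerically; here is a small Python sketch (the test angles are my own arbitrary choice):

```python
import math

# Spot-check identities 10-12 at arbitrary angles.
alpha, beta = 0.7, 0.3  # test angles in radians
s, d = (alpha + beta) / 2, (alpha - beta) / 2  # half-sum and half-difference

# 10. cos(a) + cos(b) = 2 cos(s) cos(d)
assert math.isclose(math.cos(alpha) + math.cos(beta), 2 * math.cos(s) * math.cos(d))
# 11. sin(a) - sin(b) = 2 sin(d) cos(s)
assert math.isclose(math.sin(alpha) - math.sin(beta), 2 * math.sin(d) * math.cos(s))
# 12. sin(a) + sin(b) = 2 sin(s) cos(d)
assert math.isclose(math.sin(alpha) + math.sin(beta), 2 * math.sin(s) * math.cos(d))
print("all three identities check out")
```

Of course, a numeric check at one pair of angles is no substitute for the algebraic proofs above; it just catches transcription mistakes.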

Posted in Math | Comments Off on Proofs of some trigonometric identities

Proof that the limit as n approaches infinity of n^(1/n) is 1 (\(\lim_{n \to \infty} n^{1/n} = 1\))

Here’s an important limit from real analysis that gives quite a few people, including myself, a lot of trouble:
$$\lim_{n \to \infty}n^{1/n} = 1$$

Here is the proof that my Advanced Calculus professor at Western Michigan University, Yuri Ledyaev, gave in class. It uses the binomial expansion.

Proof: Since \(n \in \mathbb{N} \), for all \(n \geq 2\) we can write
$$\begin{eqnarray}
n^{1/n} &=& 1 + \alpha \quad [\text{where } \alpha \geq 0] \nonumber \\
(n^{1/n})^n &=& (1 + \alpha)^n \nonumber \\
n &=& (1 + \alpha)^n \nonumber
\end{eqnarray}$$

We want to estimate \( \alpha \): if \( \alpha \to 0 \) as \( n \to \infty \), then \( n^{1/n} = 1 + \alpha \to 1 \), which is exactly the limit we’re after. The binomial theorem says that
$$\begin{eqnarray}
(a+b)^n &=& a^n + na^{n-1}b + \frac{n(n-1)}{2}a^{n-2}b^2 + \cdots + b^n \nonumber \\
&=& \sum\limits_{k=0}^n \binom{n}{k} a^{n-k}b^k \nonumber
\end{eqnarray}$$
so
$$\begin{eqnarray}
(1+\alpha)^n &=& 1^n + \binom{n}{1}1^{n-1}\alpha + \binom{n}{2}1^{n-2}\alpha^2 + \cdots + \alpha^n \nonumber \\
&=& 1 + n\alpha + \frac{n(n-1)}{2}\alpha^2 + \cdots + \alpha^n \nonumber \\
&\geq& 1 + n\alpha + \frac{n(n-1)}{2}\alpha^2 \nonumber \\
&>& 1 + \frac{n(n-1)}{2}\alpha^2 \nonumber
\end{eqnarray}$$
(The last step uses \( \alpha > 0 \), which holds because \( n^{1/n} > 1 \) for \( n \geq 2 \).)

So we have
$$\begin{eqnarray}
1+\frac{n(n-1)}{2}\alpha^2 &<& (1+\alpha)^n = n \nonumber \\
\frac{n(n-1)}{2}\alpha^2 &<& n - 1 < n \nonumber \\
\alpha^2 &<& \frac{n}{\frac{n(n-1)}{2}} = \frac{2}{n-1} \nonumber \\
\alpha &<& \sqrt{\frac{2}{n-1}} \nonumber
\end{eqnarray}$$

Since \(0 \leq \alpha < \sqrt{\frac{2}{n-1}}\) and \(\sqrt{\frac{2}{n-1}} \to 0\) as \(n \to \infty\), the squeeze theorem gives \(\lim_{n \to \infty}\alpha = 0 \), and so \(\lim_{n \to \infty}n^{1/n} = \lim_{n \to \infty}(1+\alpha) = 1+0=1\). \(\blacksquare\)
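The bound derived in the proof can be sanity-checked numerically; here is a small Python sketch (the sample values of \(n\) are my own choice):

```python
import math

# Check that alpha = n^(1/n) - 1 stays below the derived bound
# sqrt(2/(n-1)), and watch n^(1/n) head toward 1.
for n in [2, 10, 100, 10_000, 1_000_000]:
    alpha = n ** (1 / n) - 1
    bound = math.sqrt(2 / (n - 1))
    assert 0 <= alpha < bound  # the bound from the proof holds

print("n^(1/n) for n = 10^6:", 1_000_000 ** (1 / 1_000_000))
```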

Posted in Math | 1 Comment

Interesting limit from real analysis: lim n!/n^n

In my Advanced Calculus (introductory real analysis) course at Western Michigan University, Dr. Ledyaev gave us this limit as a bonus homework problem to turn in:

$$\lim_{n \to \infty}\frac{n!}{n^n} = \;?$$

The answer is that the sequence converges and its limit is 0. Here is how I showed this:

Claim: The sequence \( \{a_n: a_n = \frac{n!}{n^n}\}\) is monotonically decreasing and bounded below.

Proof: Note that both the numerator \(n!\) and the denominator \(n^n\) are positive for all \(n \in \mathbb{N}\), so \(\{a_n\}\) is bounded below (by \(0\)). To determine whether the sequence is increasing or decreasing, we can examine the ratio of consecutive terms:
$$\begin{eqnarray}
\frac{a_{n+1}}{a_n} &=& \frac{\frac{(n+1)!}{(n+1)^{n+1}}}{\frac{n!}{n^n}} \nonumber \\
&=& \frac{(n+1)!}{(n+1)^{n+1}} \cdot \frac{n^n}{n!} \nonumber \\
&=& \frac{n!(n+1)}{(n+1)^{n+1}} \cdot \frac{n^n}{n!} \nonumber \\
&=& \frac{n+1}{(n+1)(n+1)^n} \cdot n^n \nonumber \\
&=& \frac{1}{(n+1)^n} \cdot n^n \nonumber \\
&=& \frac{n^n}{(n+1)^n} < 1 \nonumber
\end{eqnarray}$$

The ratio \(\frac{a_{n+1}}{a_n} < 1\) for all \(n\), meaning the sequence is monotonically decreasing. Since it is bounded below and monotonically decreasing, it converges to a limit. $$\tag*{$\blacksquare$}$$

Claim: \( \lim_{n\rightarrow\infty} \frac{n!}{n^n} = 0. \)

Proof: Since \( \{a_n\} \) converges to a limit, call the limit \(L\). An important theorem states that \(\{a_{n+1}\}\) also converges to \(L\). From above,
$$\begin{eqnarray}
\frac{a_{n+1}}{a_n} &=& \frac{n^n}{(n+1)^n} \nonumber \\
a_{n+1} &=& \frac{n^n}{(n+1)^n} \cdot a_n \nonumber
\end{eqnarray}$$

And using the product rule for limits,
$$\begin{eqnarray}
\lim_{n \to \infty}a_{n+1} &=& \lim_{n \to \infty}\frac{n^n}{(n+1)^n} \cdot \lim_{n \to \infty}a_n \nonumber \\
L &=& \lim_{n \to \infty}\frac{n^n}{(n+1)^n} \cdot L \nonumber
\end{eqnarray}$$

If we can show that \( \frac{n^n}{(n+1)^n} \) converges to some real number \( r \), then we will have \( L = r \cdot L \). If \( r \neq 1 \), then the only solution to that equation is \( L = 0 \). In fact, we will show that \( \frac{n^n}{(n+1)^n} \) converges to \( \frac{1}{e} \). Observe,

$$\frac{n^n}{(n+1)^n} = \left(\frac{n}{n+1}\right)^n = \left(\frac{1}{\frac{n+1}{n}}\right)^n = \left(\frac{1}{1+\frac{1}{n}}\right)^n = \frac{1}{\left(1+\frac{1}{n}\right)^n}$$

Recall that \( \lim_{n \to \infty}\left(1+\frac{1}{n}\right)^n = e \), so by the quotient rule for limits, \( \lim_{n \to \infty}\frac{1}{\left(1+\frac{1}{n}\right)^n} = \frac{1}{e} \). Substituting \( \frac{1}{e} \) for \( r \) above, we have \( L = \frac{1}{e} \cdot L \), so \( L = \lim_{n \to \infty}a_n = 0 \). $$\tag*{$\blacksquare$}$$
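The three ingredients of the proof (monotonicity, the ratio tending to \(1/e\), and \(a_n \to 0\)) can all be checked numerically; here is a small Python sketch (the cutoff \(n = 59\) is my own arbitrary choice):

```python
import math

# Numeric sanity check: a_n = n!/n^n is decreasing, the ratio of
# consecutive terms approaches 1/e, and a_n itself heads to 0.
prev = None
for n in range(1, 60):
    a_n = math.factorial(n) / n ** n
    if prev is not None:
        assert a_n < prev  # monotonically decreasing
    prev = a_n

# a_60 / a_59 = 59^59 / 60^59, which should already be close to 1/e
ratio = 59 ** 59 / 60 ** 59
assert abs(ratio - 1 / math.e) < 0.01
assert prev < 1e-20  # a_59 is already tiny
print("decreasing, ratio near 1/e, a_n heading to 0")
```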

Posted in Math | Comments Off on Interesting limit from real analysis: lim n!/n^n

Cool theorem about midpoints and parallel vectors from multivariable calculus

This is a cool theorem from multivariable calculus that my professor at Western Michigan University, Steve Mackey, showed us during lecture one day early in the semester.

Theorem: Let \(A\), \(B\), \(C\), and \(D\) be any four points in \(\mathbb{R}^{3}\). Let \(M\), \(N\), \(P\), and \(Q\) be the midpoints of \(AB\), \(BC\), \(CD\), and \(DA\), respectively. Then vector \(\overrightarrow{MQ}\) must always be identical to vector \(\overrightarrow{NP}\).

I have inserted a picture to help visualize it. I made the picture in PowerPoint, so it’s about as accurate as a hand-drawn picture.

It looks like the points are all in the same plane, but that’s just because it’s easier to draw them that way. They can be any four points in three-dimensional space.

Proof: Start with vector \(\overrightarrow{MQ}\). By the basic rules of vector addition,

$$\begin{eqnarray}
\overrightarrow{MQ} &=& \overrightarrow{MA} + \overrightarrow{AQ} \nonumber \\
&=& \tfrac{1}{2}\overrightarrow{BA} + \tfrac{1}{2}\overrightarrow{AD} \nonumber \\
&=& \tfrac{1}{2}\left(\overrightarrow{BA} + \overrightarrow{AD}\right) \nonumber \\
&=& \tfrac{1}{2}\overrightarrow{BD} \nonumber
\end{eqnarray}$$

Now do the same with vector \(\overrightarrow{NP}\):

$$\begin{eqnarray}
\overrightarrow{NP} &=& \overrightarrow{NC} + \overrightarrow{CP} \nonumber \\
&=& \tfrac{1}{2}\left(\overrightarrow{BC} + \overrightarrow{CD}\right) \nonumber \\
&=& \tfrac{1}{2}\overrightarrow{BD} \nonumber
\end{eqnarray}$$

Thus, \(\overrightarrow{MQ} = \overrightarrow{NP} = \tfrac{1}{2}\overrightarrow{BD}\). \(\blacksquare\)
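A quick numeric check of the theorem, with the four points drawn at random (my own sketch, not part of the lecture):

```python
import random

# Pick four arbitrary points A, B, C, D in R^3 and verify that the
# vector from M (midpoint of AB) to Q (midpoint of DA) equals the
# vector from N (midpoint of BC) to P (midpoint of CD).
def midpoint(p, q):
    return tuple((a + b) / 2 for a, b in zip(p, q))

def vec(p, q):
    # vector from point p to point q
    return tuple(b - a for a, b in zip(p, q))

random.seed(0)
A, B, C, D = [tuple(random.uniform(-10, 10) for _ in range(3)) for _ in range(4)]
M, N, P, Q = midpoint(A, B), midpoint(B, C), midpoint(C, D), midpoint(D, A)

MQ, NP = vec(M, Q), vec(N, P)
assert all(abs(x - y) < 1e-9 for x, y in zip(MQ, NP))
print("MQ == NP for four random points in R^3")
```

Algebraically this is no surprise once you see the proof: both vectors reduce to \(\tfrac{1}{2}\overrightarrow{BD}\), so any random draw of points will agree.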

The reason this is so cool is that it holds true for any four points in \(\mathbb{R}^{3}\), which makes it very unexpected. You can rearrange the names of the points and midpoints so that the vector labeled MQ isn’t identical to the vector labeled NP, but then two other vectors will be identical. The point is that given any four points in three-dimensional space, some pair of midpoint-connecting vectors will be identical.

After I wrote this post, I realized something that Dr. Mackey didn’t mention (or at least, I didn’t write in my notes): two other vectors must also be identical. Can you show which ones?

Posted in Math | Comments Off on Cool theorem about midpoints and parallel vectors from multivariable calculus

For some reason, I really liked The Host

Last weekend Kathy and I watched the movie The Host on Netflix. It’s based on the novel by Stephenie Meyer, whose name I just found out has no A’s in it. This movie is yet another example of why you (or at least I) shouldn’t read other people’s opinions of a movie, TV show, or book, or even peek at the average rating at a place like IMDb or Rotten Tomatoes, before checking it out yourself. Luckily, I didn’t, so I had no idea how down on this movie most people were, even though I knew it was based on a Stephenie Meyer novel that Kathy quit reading early on and that her friend finished but disliked. Sometimes a low rating can lower your expectations so much that you enjoy it more than you expect, but other times it can make you expect badness and notice it more acutely than you might have. This is especially true if you read negative reviews first and hear what specific criticisms people have.

I love science fiction more than any other genre, whereas Kathy couldn’t even finish Hyperion. (I mean, seriously, Hyperion! An all-time masterpiece of science fiction! Everyone should like that! At least she liked Ender’s Game, though I still haven’t been able to talk her into reading Speaker for the Dead.) Even so, this movie was her choice. We tend to take turns choosing what we watch, and she chose The Host this time. For obvious, Stephenie Meyer–related reasons, this was more of a “her” movie in our Netflix queue, though given its premise and the fact that it is science fiction and not fantasy, it should have been a movie that I’d be expected to like more than she would. It’s kind of funny, though, and a good thing, that we both end up liking most of the movies that are chosen by only one of us. Some recent examples are Moneyball (mine), What To Expect When You’re Expecting (hers), and The Perks of Being a Wallflower (hers). A good example of a movie only the chooser liked is The Messenger (mine). Oddly enough, I think two other Saoirse Ronan movies were liked less by the chooser than the other person: I didn’t think The Lovely Bones (hers) was all that bad, though I certainly have no desire to watch it again or buy it; and she might have liked Hanna (mine) a little more than I did, though I don’t think either of us will want to see it again. I also seem to remember Kathy liking Rare Exports (mine) more than I did. In these three cases and possibly others I can’t remember, the chooser’s relative dislike of a movie was probably related to their high expectations, which was why they chose it. See? Always have low expectations!

Many movie critics and fans are probably still waiting for the deeply talented Saoirse Ronan to headline a high-quality movie, but I think The Host fits the bill. I understand the widespread criticism that the movie is slow, plodding, and low on action, but I was not bored or indifferent during a single scene. It isn’t an action-packed movie, but I think that’s fine because that’s not what it was meant to be and not what it needs to be. The movie didn’t feel too long or dragged out at all.

A second common criticism is Stephenie Meyer–related: the love rectangle is not compelling, it’s too young adult-y and teenage girl-y, it’s too infected with Nicholas Sparks sappiness, and the two boys are not given enough depth or characterization to make us feel strongly about it. I also understand this criticism but disagree with it more strongly than with the first one. I didn’t think it was too pandering to a teenage-girl audience; I merely thought it was depicting what a teenage girl in Melanie’s situation might go through. I should mention that the four characters in this love rectangle are Melanie, the alien that inhabits her and controls her body (“Wanderer”), and the two aforementioned boys. I did think that Melanie’s reasons for wanting Wanderer to do this and not wanting Wanderer to say that to the two boys, as well as her brother and uncle, were not explained and fleshed out as fully as they could have been, causing a little frustration and confusion in me, but this was the only aspect of the movie I found frustrating.

The third common criticism I encountered in reading the reviews after I saw the movie was that the dialog between Melanie (from inside her own mind) and Wanderer (using Melanie’s actual voice) was unintentionally funny and atrociously written. I strongly disagree. I’m no professional movie critic and know nothing about how to write movie dialog, but I found the dual-personality aspect of Melanie/Wanderer well written and expertly performed. I thought Ronan’s acting, the script, and the directing perfectly depicted the conflicted nature of a mind struggling to assert itself—to exist—and an alien struggling to justify its actions and reconcile them with its sense of morals. Other than the overall science-fiction storyline, Ronan’s portrayal of this inner struggle was the highlight of the movie for me. But maybe it could have been even better if that struggle was less about boys and more about deeper ethical and psychological issues.

The main reason I’m even writing this post, now 800-plus words in, is to respond to a truly vacuous, clueless, bafflingly stupid statement by Claudia Puig in her review of the movie for USA Today. The premise of The Host is that an alien species has invaded and populated the Earth by taking over our bodies and our minds. When an alien does this, the human host is effectively killed; their mind ceases to function if not exist altogether, and the body is controlled by the alien. The alien can access all of the host’s memories, which is especially useful for finding rebels who would prefer not to be killed and their species exterminated. The thing is, a rare human will have a strong enough psyche to rebel against its possessor and stay alive, as Melanie does. Usually when this happens, the other aliens just remove their comrade and kill the rebellious host or possess the host with a stronger, more ruthless alien. Only a few small pockets of living humans remain, in hiding or on the run. But according to Claudia Puig, these rebellious hosts and the insurgents who have avoided parasitization altogether are being irrational and primitive, because look at all the progress the aliens have created!

Like Twilight, the action is slowed by too many dull-eyed stares meant to be smoldering. A bigger problem is that the aliens are an exceedingly pleasant bunch who have rid the world of its problems. What’s not to like? The human rebellion comes off like a bunch of hillbillies angry for no justifiable reason.

I’ll repeat that in case your mind was too blindsided and dumbfounded by such idiocy to process it: An alien race wants to exterminate the human race and is damn close to doing it, and the humans who resist this eventuality are “hillbillies” who are “angry for no justifiable reason.” It boggles the mind. One is liable to sit agape in horror and depression at the psyche that could conjure such an opinion—at the types of real-world leaders, ideas, and solutions Claudia Puig would endorse and the horrors we would have to inflict upon our fellow humans to achieve her ideal order. It’s like she perceives “progress” and “peace” as some nearly tangible, identifiable things that have value on their own and should be strived for at all costs, regardless of who is doing the striving and who is benefitting from them. She must have had the same scornful reaction to all that pesky resistance the Borg face from all those hillbilly humanoids who like their species the way they are. She must not have objected to the Borg’s assimilation of the human race at the beginning of Star Trek: First Contact and must have been equally annoyed and confused at the Enterprise for going back in time and foolishly trying to stop it. There is literally no difference between the Borg and the aliens of The Host, except superficially. I never thought I could lose all respect for someone as a person from reading a mere movie review, but I never thought I’d read anything so contradictory, so insulting, to rational thought in a mere movie review.

Posted in Movies | Comments Off on For some reason, I really liked The Host

Probability problem from Star Trek: The Next Generation

In the first episode of season 7 of Star Trek: TNG, “Descent, part II”, a certain character (no spoilers from me!) tells another character that a medical experiment has a 60% chance of failing, meaning it will kill the subject. But, this evil character says, since he has three captives to perform the experiment on, “the odds are that at least one of the procedures will be successful.”

Is he right? Is there a >50% chance that at least one of the procedures will be successful? With a 40% chance of succeeding and three trials to get it right, it seems intuitively likely that at least one of them will succeed. But because I last watched this episode shortly after my first semester of Statistics, I thought it’d be fun to calculate the exact probability that at least one of the procedures will be successful.

From introductory Statistics, we can see that this is a relatively simple binomial experiment, with \(p\) (the probability of success) \(= .4\) and \(n = 3\). As is often the case when you need to calculate the probability that something will happen at least once, it is easiest to calculate the probability that it won’t happen, and subtract that from \(1\).


\(P(all~three~procedures~fail) = .6^3 = .216 \\
P(at~least~one~procedure~succeeds) = 1 - .216 = .784\)

There are two other ways to compute this probability. Hopefully, they yield the same result!

From an important binomial probability theorem,

\(b(x; n, p) = {n \choose x} p^x (1 - p)^{n-x}\)

where \(b\) is the probability mass function (pmf) of a binomial experiment, meaning the probability of a single outcome (as opposed to the cumulative distribution function, which measures the collective probability of multiple outcomes), \(x\) is the number of successes, \(n\) is the total number of trials, and \(p\) is the probability of success. The notation \({n \choose x}\) is pronounced “n choose x” and means the total number of ways to choose \(x\) outcomes out of \(n\) possible outcomes. This is a good introduction to combinations (and permutations).

First, let’s use the binomial pmf to calculate the probability of zero survivors among the three procedures:

\(b(0; 3, .4) = {3 \choose 0} (.4)^0 (1 - .4)^3 = .216\)

As it turns out in this simple example, the above computation is just \(1\cdot 1\cdot .6^3\), so basically the same as the original high-school-level computation we did first. I’ll go out on a limb and assume that subtracting this from \(1\) will give the same result as it did above.

We can also use that binomial pmf to calculate the probability that one procedure will succeed plus the probability that two will succeed plus the probability that all three will succeed. This calculation would ignore the reality that the evil experimenter will stop after the first success, but to calculate the probability that at least one procedure will succeed, we need to include all three of them.

\(b(1; 3, .4) + b(2; 3, .4) + b(3; 3, .4) \\
= {3 \choose 1} (.4)^1 (1 - .4)^2 + {3 \choose 2} (.4)^2 (1 - .4)^1 + {3 \choose 3} (.4)^3 (1 - .4)^0 \\
= .432 + .288 + .064 = .784\)
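All of these computations are easy to reproduce in a few lines of Python (my own sketch; the helper `b` just mirrors the pmf notation above):

```python
from math import comb

# Binomial pmf: b(x; n, p) = C(n, x) * p^x * (1-p)^(n-x)
def b(x, n, p):
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

n, p = 3, 0.4

# Method 1: complement of "all three fail"
assert abs((1 - 0.6 ** 3) - 0.784) < 1e-9
# Method 2: 1 - b(0; 3, .4)
assert abs((1 - b(0, n, p)) - 0.784) < 1e-9
# Method 3: sum of exactly 1, 2, and 3 successes
assert abs(sum(b(x, n, p) for x in (1, 2, 3)) - 0.784) < 1e-9
print("all three methods give 0.784")
```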

I know of one final way to calculate the probability that at least one procedure will succeed: use the TI-83’s binomcdf function. It is located under the DISTR menu, which is the 2nd option on the VARS key. The syntax is

\(binomcdf(n, p, x)\)
and this tells you the cumulative probability of all outcomes in a binomial experiment from \(0\) to \(x\) successes. In this case, we are interested in the cumulative probability from \(x=1\) to \(x=3\), not \(x=0\) to \(x=3\). Therefore, in the TI-83 we can type either

\(binomcdf(3,.4,3) - binomcdf(3,.4,0)\) or
\(binomcdf(3,.4,3) - binompdf(3,.4,0)\)

Both commands tell us the cumulative probability of zero successes through three successes minus the probability of zero successes, and both give \(.784\).
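For anyone without a TI-83 handy, the same binomcdf/binompdf arithmetic can be sketched in Python (the function names and argument order here simply mimic the calculator's; they are not a standard library API):

```python
from math import comb

# Stand-ins for the TI-83's binompdf and binomcdf: the cdf just sums
# the binomial pmf from 0 through x successes.
def binompdf(n, p, x):
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

def binomcdf(n, p, x):
    return sum(binompdf(n, p, k) for k in range(x + 1))

# Both expressions from the post give P(at least one success) = .784
r1 = binomcdf(3, 0.4, 3) - binomcdf(3, 0.4, 0)
r2 = binomcdf(3, 0.4, 3) - binompdf(3, 0.4, 0)
assert abs(r1 - 0.784) < 1e-9 and abs(r2 - 0.784) < 1e-9
print(round(r1, 3))
```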

So we can see that our common-sense intuition was right: with a 40% chance of success, the chances are very favorable that at least one of the first three trials will produce a success.

At what point does the probability of at least one success surpass 50%? My guess is two trials. This can be easily confirmed by changing \(n\) from \(3\) to \(2\) and calculating the binomial probability:

\(P(getting~at~least~one~success~out~of~the~first~two~trials) \\
= b(2; 2, .4) + b(1; 2, .4) = {2 \choose 2} .4^2 .6^0 + {2 \choose 1} .4^1 .6^1 \\
= .64 \\
(= 1 - b(0; 2, .4) = 1 - .36 = .64)\)

Another, more high-school-ish way to verify the probability of succeeding within the first two trials is to realize there are only two ways this could happen: succeed on the first trial, or fail on the first trial and succeed on the second:

\(P(succeed~on~the~first~trial) + P(fail~first~and~then~succeed)\\
= .4 + .6\cdot .4\\
= .64\)
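Both routes to the two-trial answer are one-liners in Python (a quick check, nothing more):

```python
# P(at least one success in two trials), two equivalent ways
p = 0.4

direct = p + (1 - p) * p        # succeed first, or fail then succeed
complement = 1 - (1 - p) ** 2   # 1 - P(both trials fail)

assert abs(direct - 0.64) < 1e-9
assert abs(complement - 0.64) < 1e-9
print(direct)
```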

Another thing our evil experimenter might be interested in is the expected value of the number of captives he will need to achieve success. Expected value is basically a weighted average. This is a good beginner’s summary of expected value. One of the first things that strikes any Statistics/Probability student about expected value is that you should hardly ever actually expect to get the expected value in an experiment, because often the expected value is impossible to achieve. For instance, your experiment only produces integer outcomes, but the expected value, being a (weighted) average, is a decimal. This is the case with many binomial experiments. The number of captives our evil experimenter will perform the procedure on is \(1\), \(2\), or \(3\), but I bet the expected value of this binomial experiment will be between \(1\) and \(2\).

The definition of expected value as a weighted average is more apt for general discrete random variables than for binomial variables specifically, but you can still calculate expected values for binomial distributions. In fact, in this case we can calculate two different expected values.

First, the simple, standard expected value of a binomial distribution: \(E(X) = np\). That is, the expected number of successes from \(n\) trials is \(n\) times the probability of success. Pretty simple, huh? So

\(E(X) = np = 3\cdot .4 = 1.2\)

So if he performed the procedure on all three captives, he should expect \(1.2\) successes. Similarly, the expected number of successes after the first two trials is \(.8\), and the expected number of successes after the first trial is \(.4\).

But that’s not the expected value I originally referred to. I said the experimenter might be interested in the expected number of procedures he’d have to perform to reach one successful procedure. This is the expected value of a geometric distribution: if each trial succeeds with probability \(p\), the expected number of trials up to and including the first success is \(1/p\). You can also get there informally:

\(1 = n(.4) \\
1/.4 = n\\
2.5 = n\)

In other words, since each experiment has a \(.4\) chance of succeeding, how many experiments do you expect to need to reach \(1\) success? What times \(.4\) equals \(1\)? It’s \(2.5\).
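A quick simulation of the process (my own sketch; the simulation size is arbitrary) agrees with this expected value:

```python
import random

# Simulate the experimenter's process: keep performing procedures,
# each succeeding with probability 0.4, until the first success,
# and count how many were needed. Average over many runs.
random.seed(1)

def trials_until_success(p=0.4):
    n = 1
    while random.random() >= p:  # this trial failed; try again
        n += 1
    return n

sims = [trials_until_success() for _ in range(200_000)]
mean = sum(sims) / len(sims)
assert abs(mean - 2.5) < 0.05  # should be close to 1/p = 2.5
print(round(mean, 2))
```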

That’s higher than I expected. That was the number I expected to be between \(1\) and \(2\). This seems incongruent with our result above that the probability of success surpasses 50% after two trials. If the probability of success becomes better than even after two trials, shouldn’t you expect to reach one success in \(\leq 2\) trials? And shouldn’t the expected number of successes after two trials be something greater than \(1\), instead of \(.8\), then? I know both sets of calculations are correct, so this is either one of those counterintuitive results you often get in probability, or I’m framing one of the questions wrong. (I suspect the resolution is that the 64% figure reflects the median number of trials needed, which is 2, while 2.5 is the mean, which the distribution’s long right tail pulls upward.)

Posted in Math, TV | 1 Comment