Sunday, April 26, 2020

Introduction to Coding

Table of Contents

1. Introduction
2. Reblogging -- and Finishing -- BlooP and FlooP and GlooP
3. Mocha & BASIC
4. The Logo Language
5. The Pascal Language
6. The TI-BASIC Language
7. C++ and Moving Forward with Java
8. A Few Math Problems
9. What About FlooP and GlooP?
10. Conclusion

Introduction

This is my 1100th post. And it is one of my most important posts to date, because this post will determine the future -- not just of my blog, but of my career.

As I mentioned before, as the school closures grow longer, I'm very concerned with what effect the coronavirus will have on education. My biggest fear is that many students and parents will choose to avoid attending school even in the fall. They might decide that the future of education is online. And if that happens, there'll be no need for subs like me -- and less need even for regular teachers. All that will be needed for education are computers.

On Friday, my stimulus check from the federal government arrived. But, as we all know, this will provide me with only temporary relief. If the schools don't reopen in the fall because parents and students are still worried about the virus, I'm going to need another source of income.

And that's what this post is all about. I'm seriously considering a career change. After all, teaching may be a thing of the past, while computers are definitely a thing of the future. And if I want to be hired as a coder, I need to improve my coding skills.

Keep in mind that this is a Geometry blog, not a coding blog. Thus if at any point I'm hired for a computer (or any other non-teaching) job full-time, then that's the end of this blog. This blog might not make it to a 1200th post.

But until that actually happens, I'll keep blogging. As long as it's theoretically possible that I'll see the inside of a classroom again, this blog will remain.

In the meantime, I'll blog about my journey towards a possible career in computers. I believe that simply writing about it makes me more likely to remain focused on it. By blogging, there's someone out there who's holding me accountable -- you readers of this blog. That way, I'll keep focused as I gradually work towards my goal. And who knows -- maybe some of you readers are also teachers who are considering a new career in the wake of the virus outbreak.

Today's post is an introduction to the world of coding. It's all about exactly what computer language I will choose to learn independently -- which one is most likely to get me hired.

Reblogging -- and Finishing -- BlooP and FlooP and GlooP

It seems as if at least part of every post these days includes reblogging previous posts. Actually, this was always true even before the virus shutdown -- nearly every Geometry lesson is the same as that from the previous year, and so I'd usually reblog it. But now that schools are closed, I've been reblogging the non-Geometry parts of my old posts.

It turns out that one year ago today we were still in the middle of our side-along reading of Hofstadter's book. And it just so happens that the chapter for April 26th was on the three computer languages that Hofstadter invented -- BlooP and FlooP and GlooP.

Hey, that fits today's post, which is all about computer languages, like a glove! So why don't we go back and reblog that particular chapter?

Indeed, last year I wrote very little about the chapter since I'd spent too much time on the previous "Dialogue" (which was all about the Collatz conjecture). So this is my opportunity, at long last, to complete that chapter. (Therefore very little of this post will actually be reblogging.)

And in doing so, I'll introduce you to computer languages (for those of you not familiar). Once again, BlooP and FlooP and GlooP are not actual computer languages -- no business is going to hire me to become a FlooP coder. But at least I'll be able to compare the Hofstadter languages to real languages that work on real computers.

Without further ado, let's begin. This is what I wrote last year about this chapter:

Chapter 13 of Douglas Hofstadter's Godel, Escher, Bach is called "BlooP and FlooP and GlooP." It begins as follows:

"BlooP, FlooP, and GlooP are not trolls, talking ducks, or the sounds made by a sinking ship -- they are three computer languages, each one with its own special purpose. These languages were invented specially for this Chapter."

I'm sorry, but it just feels so weird to be reading this book again a year later. I've forgotten that Hofstadter regularly mentions two characters -- Achilles and the Tortoise -- from Zeno's paradox:

"The Tortoise can only make a record called 'I Cannot Be Played on Record Player X' provided that Record Player X is really a record player!"

Now I remember -- the crafty Tortoise was always trying to break record players! Meanwhile, the author is trying to teach his readers about formal systems:

"We saw at an early stage that formal systems can be difficult and unruly beasts because they have lengthening and shortening rules, which can possibly lead to never-ending searches among strings."

Then Hofstadter suddenly quotes another Dialogue, from JM Jauch. Here one character, Salviati, is trying to explain to Sagredo that the series 1 - 1/3 + 1/5 - 1/7 + 1/9 - ... converges to pi/4 = 0.78539...:

Sagredo: This is a marvelous idea! It suggests that when we try to understand nature, we should look at the phenomena as if they were messages to be understood.

Let's just hurry up and get to BlooP, since that's what I'm trying to focus on here:

"Our topic will be searches for natural numbers which have various properties. In order to talk about the length of any search, we shall have to define some primordial steps, out of which all searches are built, so that length can be measured in terms of numbers of steps. Some steps which we might consider primordial are:

adding any two natural numbers;
multiplying any two natural numbers;
determining if two numbers are equal;
determining the larger (smaller) of two numbers.

"If we try to formulate a test for, say, primality in terms of such steps, we shall soon see that we have to include a control structure -- that is, descriptions of the order to do things in, when to branch back and try something again, when to skip over a set of steps, when to stop, and similar matters."

I will write Hofstadter's BlooP example:

DEFINE PROCEDURE "TWO-TO-THE-THREE-TO-THE" [N]:
BLOCK 0: BEGIN
     CELL(0) <= 1;
     LOOP N TIMES:
     BLOCK 1: BEGIN
          CELL(0) <= 3 X CELL(0);
     BLOCK 1: END;
     CELL(1) <= 1;
     LOOP CELL(0) TIMES:
     BLOCK 2: BEGIN
          CELL(1) <= 2 X CELL(1);
     BLOCK 2: END;
     OUTPUT <= CELL(1);
BLOCK 0: END.

The author now explains how this code works:

"However, I hope that this algorithm is simple enough that it makes sense without too much scrutiny. A procedure is defined, having one input parameter, N; its output is the desired value.

"The strategy of the above algorithm is as described earlier. You begin by taking an auxiliary variable, called CELL(0); you set it initially to be 1, and then, in a loop, you multiply it repeatedly by 3 until you've done so exactly N times. Next, you do the analogous thing for CELL(1) -- set it to 1, multiply by 2 exactly CELL(0) times, then quit. Finally, you set OUTPUT to be the value of CELL(1). This is the value returned to the outside world -- the only externally visible behavior of the procedure.

"A number of points about the notation should be made here. First, the meaning of the left-arrow '<=' is this:

Evaluate the expression to its right, then take the result and set the CELL (or OUTPUT) on its left to that value.

"Every procedure in BlooP, when called, yields a value -- namely the value of the variable called OUTPUT. At the beginning of execution of any procedure, it is assumed as a default option that OUTPUT has the value 0. That way, even if the procedure never resets OUTPUT at all, OUTPUT has a well-defined value at all times."
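For comparison with a modern language, here's how the same procedure might look in Python. (Python isn't one of the languages discussed in this post; the names two_to_the_three_to_the, cell0, and cell1 are my own, mirroring Hofstadter's CELLs.)

```python
def two_to_the_three_to_the(n):
    # CELL(0) <= 1; then LOOP N TIMES, tripling each time
    cell0 = 1
    for _ in range(n):
        cell0 = 3 * cell0
    # CELL(1) <= 1; then LOOP CELL(0) TIMES, doubling each time
    cell1 = 1
    for _ in range(cell0):
        cell1 = 2 * cell1
    return cell1  # OUTPUT

print(two_to_the_three_to_the(2))  # 2^(3^2) = 512
```

Notice that the return value plays the role of OUTPUT, and that Python's integers never overflow, so this sketch works for larger N than any of the machines below.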

Hofstadter's next BlooP example is subtraction -- only addition is primordial, so how do we subtract?

DEFINE PROCEDURE "MINUS" [M,N]:
BLOCK 0: BEGIN
     IF M < N THEN:
     QUIT BLOCK 0;
     LOOP AT MOST M + 1 TIMES:
     BLOCK 1: BEGIN
          IF OUTPUT + N = M, THEN:
          ABORT LOOP 1;
          OUTPUT <= OUTPUT + 1;
     BLOCK 1: END;
BLOCK 0: END.

The author now explains how this code works:

"Here we are making use of the implicit feature that OUTPUT begins at 0. If M is less than N, then subtraction is impossible, and we simply jump to the bottom of BLOCK 0 right away, and the answer is 0. That is what is meant by the line QUIT BLOCK 0. But if M is not less than N, then we skip over that QUIT-statement, and carry out the next command in sequence (here, a LOOP-statement). That is how IF-statements always work in BlooP."

We've discussed IF-statements on the blog before. That's because they're featured in Lesson 2-3 of the U of Chicago text.

"Notice that there are two distinct instructions for jumping downwards: QUIT and ABORT. The former pertains to blocks, the latter to loops. QUIT BLOCK n means to jump to the last line of BLOCK n, whereas ABORT LOOP n means to jump just below the last line of BLOCK n. This distinction only matters when you are inside a loop and want to continue looping but to quit the block this time around. Then you can say QUIT and the proper thing will happen."
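A Python sketch of MINUS makes the QUIT/ABORT distinction concrete: an early return plays the role of QUIT BLOCK 0, and break plays the role of ABORT LOOP 1. (The variable name output is my own, standing in for BlooP's OUTPUT.)

```python
def minus(m, n):
    # OUTPUT starts at its default value of 0
    output = 0
    if m < n:
        return output            # QUIT BLOCK 0: answer stays 0
    for _ in range(m + 1):       # LOOP AT MOST M + 1 TIMES
        if output + n == m:
            break                # ABORT LOOP 1
        output += 1
    return output

print(minus(7, 2))  # 5
```

Like BlooP's cutoff subtraction, this returns 0 whenever M is less than N.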

We learn that BlooP supports automatic "chunking," and this is explained by an analogy:

"You might compare it to the way a good ice skater acquires new motions: not by defining them as long sequences of primordial muscle-actions, but in terms of previously learned motions, which were themselves learned as compounds of earlier learned motions, etc. -- and the nestedness, or chunkedness, can go back many layers until you hit primordial muscle-actions."

Hofstadter's next BlooP example is on testing for primality:

DEFINE PROCEDURE "PRIME?" [N]:
BLOCK 0: BEGIN
     IF N=0 THEN:
     QUIT BLOCK 0;
     CELL(0) <= 2;
     LOOP AT MOST MINUS [N,2] TIMES:
     BLOCK 1: BEGIN
          IF REMAINDER [N,CELL(0)] = 0, THEN:
          QUIT BLOCK 0;
          CELL(0) <= CELL(0) + 1;
     BLOCK 1: END;
     OUTPUT <= YES;
BLOCK 0: END.

The author now explains how this code works:

"Notice that I have called two procedures inside this algorithm: MINUS and REMAINDER. (The latter is presumed to have been previously defined, and you may work out its definition yourself.) Now this test for primality works by trying out potential factors of N one by one, starting with 2 and increasing to a maximum of N - 1. In case any of them divides N exactly (i.e., gives remainder 0), then we jump down to the bottom, and since OUTPUT still has its default value at this stage, the answer is NO. Only if N has no exact divisors will it survive the entirety of LOOP 1; then we will emerge smoothly at the statement OUTPUT <= YES, which will get executed, and then the procedure is over."
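Here's a Python transliteration of PRIME? (the name prime_ is my own). I've kept it faithful to the BlooP, so note that, like the original, it answers YES for N = 1:

```python
def prime_(n):
    # Faithful sketch of Hofstadter's PRIME?: 0 is special-cased,
    # then trial divisors run from 2 for at most MINUS [N,2] passes.
    if n == 0:
        return False             # QUIT BLOCK 0; OUTPUT keeps its default NO
    cell0 = 2
    for _ in range(max(n - 2, 0)):
        if n % cell0 == 0:
            return False         # QUIT BLOCK 0
        cell0 += 1
    return True                  # OUTPUT <= YES
```

Here Python's % operator stands in for the REMAINDER procedure that Hofstadter leaves as an exercise.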

So far all we've seen are procedures. But, as we soon find out, whole BlooP programs contain chains of procedures:

"Thus, an example of a full BlooP program would be the definition of the procedure TWO-TO-THE-THREE-TO-THE, followed by the call

TWO-TO-THE-THREE-TO-THE [2]

which would yield an answer of 512."

Hofstadter's next BlooP example is on testing for Goldbach's conjecture:

DEFINE PROCEDURE "GOLDBACH?" [N]:
BLOCK 0: BEGIN
     CELL(0) <= 2;
     LOOP AT MOST N TIMES:
     BLOCK 1: BEGIN
          IF {PRIME? [CELL(0)]
            AND PRIME? [MINUS [N,CELL(0)]]}
              THEN:
          BLOCK 2: BEGIN
               OUTPUT <= YES;
               QUIT BLOCK 0;
          BLOCK 2: END
          CELL(0) <= CELL(0) + 1;
      BLOCK 1: END;
BLOCK 0: END.

We discussed Goldbach's conjecture in Lesson 2-2 of the U of Chicago text.
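And here's a Python sketch of the Goldbach test, with one procedure calling another just as in the BlooP (the names is_prime and goldbach are my own; max(n - cell0, 0) stands in for BlooP's cutoff subtraction MINUS):

```python
def is_prime(n):
    # Hofstadter's PRIME?, transliterated: 0 is special-cased,
    # and trial divisors run from 2 up to N - 1
    if n == 0:
        return False
    for cell0 in range(2, n):
        if n % cell0 == 0:
            return False
    return True

def goldbach(n):
    # LOOP AT MOST N TIMES, with CELL(0) starting at 2
    for cell0 in range(2, n + 2):
        if is_prime(cell0) and is_prime(max(n - cell0, 0)):
            return True          # OUTPUT <= YES; QUIT BLOCK 0
    return False                 # OUTPUT keeps its default NO

print(goldbach(30))  # True
```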

At this point we'll stop reading Hofstadter's Chapter 13, since his purpose for discussing BlooP (and FlooP and GlooP) is to explain what a primitive recursive function is -- basically, it's a function that can be computed by BlooP. Instead, let's compare BlooP to some other languages that I've coded in.

Mocha & BASIC

The first language I ever coded in was, of course, BASIC. I was about four or five years old when I received my first computer -- the one that the Mocha emulator is based on. I believe that I only did a little programming in BASIC that year. I moved to a different apartment just before my kindergarten year, and the computer remained packed away until at least a year after the move. Thus I believe that I was in either first or second grade when I began learning BASIC in earnest.

Because BASIC is mentioned in the U of Chicago text -- and because I often link to Mocha from this blog -- you readers should be somewhat familiar with it. So let's try converting some of our BlooP programs to BASIC. (Yes -- I'm actually coding in Mocha for something other than music.)

https://www.haplessgenius.com/mocha/

We'll start with our "two to the three to the" program:

10 INPUT N
20 A=1
30 FOR X=1 TO N
40 A=3*A
50 NEXT X
60 B=1
70 FOR X=1 TO A
80 B=2*B
90 NEXT X
100 PRINT B

When ready, we type in RUN to execute the program.

So Lines 20-90 correspond to the BlooP procedure above. To make a complete program, we start with Line 10, which asks the user to input a value for N. The final output is then printed on Line 100. In BASIC, all lines must be numbered, whereas in BlooP, only the blocks (0 and 1) are numbered.

As double exponentials increase so quickly, this program is only valid when N equals 1, 2, 3, or 4. If we try to input 5 for N, we'll see the following error message:

?OV ERROR IN 80

which stands for "overflow."

As it turns out, exponentiation is primordial in Mocha BASIC. We just use the up-arrow key (the actual arrow key, although it appears as a caret ^ here on the blog). If we want, we can enter it in immediate mode (that is, without coding a full program):

PRINT 2^(3^1)

Exponentiation is left-associative in Mocha, so we must use parentheses. Of course, it doesn't matter much when the exponent is 1, but if we use 2 for N, our answer will be much too low unless we include parentheses:

PRINT 2^(3^2)

Notice that the output here is:

512.000002

instead of the expected 512. This is because for primordial exponentiation, Mocha internally uses logarithms, which aren't exact. The error is more glaring when we try 3 for N:

134217729

which can't be exact, since a power of two can't possibly be odd (except for 2^0 = 1). In fact, the best way to avoid this error is simply to avoid the primordial exponentiation and just use the 10-line program we coded above.

Let's try the minus program next. Since we wish to enter a new program, we type in:

NEW

And here's the program:

10 INPUT M,N
20 IF M<N THEN END
30 FOR X=0 TO M
40 IF X+N=M THEN PRINT X: X=M
50 NEXT X

There are a few things going on here. First, we must input two numbers, M and N. We can enter both of them separated by a comma; if we enter only one number, a double ?? appears as the computer waits for the second input.

I wanted to illustrate the difference between QUIT and ABORT by having the QUIT line end the program and the ABORT line terminate the loop by setting X to the maximum value of the loop, M.

Technically, a BlooP program might contain lines after a QUIT or ABORT that BASIC would need to skip over, so there should actually be GOTO lines in the BASIC version here. I decided not to use these since I don't want our program to be too cluttered.

Also, the BlooP program outputs the default value 0 when M is less than N. For BASIC, I decided not to have Mocha print anything in this situation. The zero is printed only when the actual difference is 0 (that is, when M equals N).

There's nothing like AT MOST in BASIC. Instead, the FOR loop of BASIC has a variable, X, that keeps track of how many times we've gone through the loop. Thus OUTPUT <= OUTPUT + 1; is not needed -- the NEXT X line automatically increments the value of X. And once again, instead of an ABORT line, we end the loop by setting X to the maximum value M.

In most real computer languages, including BASIC, minus is primordial (and there's no rounding error associated with integer subtraction, as opposed to exponentiation) -- and it also returns negative values if necessary. An actual BASIC coder would use the primordial minus, not this above program.

Let's try the prime program next:

NEW
10 INPUT N
20 IF N<2 THEN PRINT "NO": END
30 IF N=2 THEN PRINT "YES": END
40 FOR X=2 TO N-1
50 IF N/X=INT(N/X) THEN PRINT "NO": END
60 NEXT X
70 PRINT "YES"

There are a few changes I made here when converting from BlooP to BASIC. First of all, Hofstadter treated 0 as a special case, but he forgot about 1 as a special case. His program will output "YES" even though 1 is not prime. I fix it here by changing N=0 to N<2 in this program.

But as it turns out, we need to treat 2 as a special case as well. (This is not Hofstadter's error -- the problem here lies with Mocha BASIC, not BlooP.) In BlooP (and many real languages), it's possible for the number of iterations of a loop to be 0. If we input 2 to the BlooP version, the loop will iterate 0 times (that is, 2 minus 2), and so the entire loop is skipped, taking us straight to "YES." But in Mocha BASIC, a FOR loop always runs at least once -- the bound isn't checked until the first NEXT. With an input of 2, our loop FOR X=2 TO N-1 becomes FOR X=2 TO 1, which still executes once. Thus BASIC would run the loop, find that 2 does divide 2 evenly, and then print "NO." We avoid this by including 2 as a special case in Line 30.

(Note: The zero problem also occurs in the double exponentiation problem. If we input 0, BASIC returns 8, even though 2^(3^0) is actually 2.)

The BlooP procedure calls two other procedures, MINUS and REMAINDER -- here Hofstadter wanted to illustrate what it means for procedures to call each other. But this isn't the best example in BASIC to demonstrate this. Instead, we just use primordial minus for subtraction.

Meanwhile REMAINDER isn't primordial in either BlooP or BASIC. Most BASIC coders would implement REMAINDER by using primordial division and INT instead. This gives the integer part of a number. If the quotient equals its own integer part, this means that the fractional part (and hence the remainder of that division) is 0.
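The INT trick translates directly into Python, where we can check it against the built-in modulo operator. (The helper name divides is my own; note that for very large values the floating-point division could round, which is why real code prefers a true remainder operator.)

```python
def divides(x, n):
    # the BASIC trick from Line 50: the quotient N/X equals its own
    # integer part exactly when the remainder of the division is 0
    return n / x == int(n / x)

# agrees with the modulo operator throughout this small range
for n in range(1, 100):
    for x in range(1, 20):
        assert divides(x, n) == (n % x == 0)
```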

OK, let's tackle the Goldbach program:

NEW
10 INPUT M
20 IF M<4 THEN PRINT "NO": END
30 FOR Y=2 TO M/2
40 N=Y
50 GOSUB 130
60 A=C
70 N=M-Y
80 GOSUB 130
90 B=C
100 IF A AND B THEN PRINT "YES": END
110 NEXT Y
120 PRINT "NO": END
130 IF N=2 THEN C=-1: RETURN
140 FOR X=2 TO SQR(N)
150 IF N/X=INT(N/X) THEN C=0: RETURN
160 NEXT X
170 C=-1: RETURN

This time, I do want to demonstrate one procedure calling another -- in this case, the Goldbach procedure calling the primality procedure. This is represented in BASIC by GOSUB, which is similar to GOTO -- except that instead of END, we RETURN to the main procedure.

In our main procedure, I decided to return "NO" early if the input is less than four, since the smallest possible sum of two primes is 4. Then, since I want to save some time, the length of the loop is only up to half the input, rather than the input itself. This is because if two numbers have a certain sum, one addend is at most half the sum while the other is at least half the sum. Thus if we're trying to find a Goldbach partition for 98 and make it all the way to 49 without finding a solution, we can be certain that there is no solution.

Now here's the most important point about one procedure calling another. Notice that in BlooP, the Goldbach procedure has a parameter N and stores values in CELL(0). It calls the primes procedure -- which itself has a parameter N and its own CELL(0). Clearly, these are different -- for example, if we're finding a Goldbach partition for 98, N for Goldbach is 98, while CELL(0) for Goldbach takes on values 2, 3, 4, 5, 6, and so on. During each iteration, Goldbach's CELL(0) becomes Prime's N, while Prime's CELL(0) takes on different values. When Prime's N is 15, its CELL(0) takes on the value 2, and then 3, as it hunts for factors of 15. And all of this happens without changing Goldbach's N from 98 or its CELL(0) from 15.

Hofstadter never explains this in his book, but apparently, BlooP uses variables that are local to the procedures in which they are found. If a procedure has a parameter, then it's a local variable, and any time a CELL appears in a procedure, it's a local variable as well. Thus different procedures can have local variables with the same names without affecting their values in other procedures.

But BASIC uses only global variables. Thus we can't simply use the same N and X (which is what we've been using for CELL(0) in our BASIC programs). Here, I decided to use M and Y in the Goldbach part of the program and then switch to N and X for the primality part.

This explains what Lines 40 and 70 do. They set N to the correct value needed for the primality part, which begins as soon as GOSUB 130 is reached.

The subroutine in Lines 130-170 matches Lines 30-70 of the original primes program. We dropped the old Line 10 because we don't want the user to keep entering values for N -- it's the main part of the program that determines what N needs to be. We also dropped the old Line 20 because the special cases 0 and 1 no longer need to be considered -- the new Lines 20-40 guarantee that N never needs to be less than 2. But since N might need to be 2, the special case for 2 remains in Line 130.

The loop in Line 140 has also been shortened -- just as the loop in Line 30 only needs to go up to M/2, the loop in Line 140 only needs to go up to the square root of N. Just as one addend of M must be at most M/2, one factor of N must be at most SQR(N). This significantly saves us time, especially since we may need to determine whether large numbers are prime.
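The square-root bound is worth seeing on its own. Here's a Python sketch of a primality test using it (math.isqrt gives the integer square root, so no floating-point rounding is involved):

```python
import math

def is_prime(n):
    # As in Line 140: trial divisors only up to the square root of N,
    # since any composite N must have a factor at most sqrt(N)
    if n < 2:
        return False
    for x in range(2, math.isqrt(n) + 1):
        if n % x == 0:
            return False
    return True
```

For N near a million this checks about a thousand divisors instead of a million -- the same savings the shortened Line 140 buys us in BASIC.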

Just as we don't want the user to INPUT values of N for the prime portion, we don't want Mocha to PRINT the words "YES" or "NO" for each factor. So instead, we use C as our OUTPUT value, except that we use special codes that BASIC recognizes -- 0 means "NO" and -1 means "YES." The values -1 and 0 automatically work with IF and AND in Line 100, just as YES and NO automatically work in the BlooP version. These are called Boolean values -- and a procedure that returns a Boolean value is called a "predicate."

By the way, if a number has a Goldbach partition, we see the word "YES" on the screen -- but we don't know exactly what the two prime addends are. But notice that at the time we reach the END, one of the addends is still sitting in Y and the other is in N (since it's the last prime we found). Thus after the program is done, the command PRINT Y,N displays the two prime addends.

OK, so now you understand the entire BASIC version. There are other ways to shorten the run time for this program, including taking advantage of parity (that is, evenness and oddness).

First of all, 4 is the only even number whose prime addends are both even (2). So we can consider 4 to be a special case at the start of the program (automatically say "YES"). And if an odd number has a Goldbach partition, one of the primes must be 2. So we can just test M-2 for primality and be done with it.

For all even numbers greater than 4, both prime addends are odd -- and any factor of a composite odd number is itself odd. So the prime test only needs to test odd divisors, which we do by changing the loop in Line 140 to FOR X=3 TO SQR(N) STEP 2. This time, 3 needs to be the special case in Line 130 instead of 2, since once again, we can't have a FOR loop that repeats zero times.
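Putting the parity shortcuts together, here's a Python sketch of the faster Goldbach test described above. (The function names are my own; this mirrors the strategy, not the BASIC line for line.)

```python
import math

def is_prime(n):
    # odd-only trial division up to sqrt(N), with small cases up front
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    for x in range(3, math.isqrt(n) + 1, 2):
        if n % x == 0:
            return False
    return True

def goldbach(m):
    if m < 4:
        return False
    if m == 4:
        return True              # 2 + 2: the only all-even partition
    if m % 2 == 1:
        return is_prime(m - 2)   # an odd sum must use 2 as one addend
    # even m > 4: both addends must be odd primes, and one is at most m/2
    return any(is_prime(y) and is_prime(m - y)
               for y in range(3, m // 2 + 1, 2))

print(goldbach(98))  # True (19 + 79, among others)
```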

Another line that can be changed is Line 90 -- just delete it and have 100 say IF A AND C, since C still contains the primality of the second addend. I kept it here because it makes the program easier to understand, but if the focus is on saving time in dealing with large numbers, then we should skip it.

Of course, there's one more way to save significant time. We know that Goldbach's conjecture has been verified for all even numbers up to 10^20 -- and as we already saw by finding 2^(3^4) in the first problem above, numbers in this range are shown approximately in scientific notation. Thus for all even numbers that can be expressed exactly in Mocha (and hence are the sums of numbers for which primality can be determined), Goldbach's conjecture is true. So just have the computer print "YES" if the number is even (and at least 4).

That would be cheating, of course. And besides -- if we want to find the specific addends which give the even number, we should just run the program and print the values of Y and N afterward.

The fact that BlooP has local variables makes it more powerful than BASIC. It means that procedures can be black boxes to their users -- we don't care how they work, but only that they work.

Indeed, the primes program in BlooP uses a procedure called REMAINDER, but Hofstadter never explains how it works. Thus to us, it is a black box. Now if BlooP didn't have local variables, we might suddenly be concerned -- "Is either parameter named N, which might clash with the parameter named N in PRIME? Does REMAINDER use CELL(0), which is also needed in PRIME?" Since the variables are local, we don't have to worry that the remainder procedure uses N or CELL(0), because they'd be distinct from the variables in the primality procedure anyway.

But since all variables in BASIC are global, it makes a huge difference exactly which variables are used in which procedure. There can be no black boxes in BASIC. This is why BASIC is considered inferior to BlooP and other languages.
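In a language with local variables, the clash we just worried about simply can't happen. Here's a Python sketch (the names are my own) where two procedures freely reuse the name n, and even a top-level n survives untouched:

```python
def remainder(n, d):
    # remainder by repeated subtraction -- this n is local to the call
    while n >= d:
        n = n - d
    return n

def prime_(n):
    # this n is a different local variable from remainder's n
    if n < 2:
        return False
    for x in range(2, n):
        if remainder(n, x) == 0:
            return False
    return True

n = 98                  # a top-level "Goldbach" n
assert prime_(13)       # calls that reuse the name n internally...
assert not prime_(15)
assert n == 98          # ...never disturb this one
```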

The Logo Language

The second computer language that I learned was Logo. This is what it meant to grow up in the 1980's -- BASIC and Logo were prevalent during that decade.

And while BASIC dominated home computers, Logo dominated school computers. As I've said before on the blog, my second grade teacher -- who was also my fifth grade teacher -- was quite strong at teaching math. But computers were also her specialty. During the Path Plan, the elective she taught was a computer class where we learned about Logo. It was held in the computer lab -- where most school computers were in the 1980's and early 1990's.

But there were opportunities to learn Logo well before Path Plan began. When I had this teacher in second grade, it was actually a Grades 2-3 combination class. Two of the other second and third grade teachers were so impressed by her knowledge of Logo that they asked her to take their classes to the computer lab as well. In return, those other teachers took our class -- one of them would teach us art, while the other taught vocal music. (The art teacher would go on to be my third grade teacher, and she also gave me some piano lessons on the side.)

While most of my classes began Logo in the second grade, I actually had the opportunity to use it in the first grade. Because I was so far above grade level in math, my first grade teacher let me take a special computer class in another room once a week. No, it wasn't my future second grade teacher (although she would give me some special reading lessons at another time), and no, it wasn't in the computer lab either. There was one classroom across the hall that actually had a few computers, and they were installed with another version of Logo, called Logo Writer. And these computers were in color, as opposed to the black/white (actually black/green) computers in the lab. The other students in the class were in Grades 5-6, thus marking the largest grade-level difference between the other students and me in any class I took.

And so even though I call BASIC my first language and Logo my second, I probably learned the nuts and bolts of both languages at the same time, around the first grade.

We usually associate Logo with turtle graphics -- and naturally, when we studied Logo in elementary school, it was nothing but turtle graphics. It wasn't until I stumbled upon Brian Harvey's website that I learned that Logo was a full-fledged language that was about more than just a turtle:

https://people.eecs.berkeley.edu/~bh/logo.html

And it's because of this that we can actually translate Hofstadter's BlooP programs into Logo. By the way, recall that Logo's turtle graphics are also mentioned in Lesson 13-8 of the U of Chicago text -- and I linked to a Logo interpreter for Lesson 13-8 last year:

https://www.calormen.com/jslogo/

Let's start with our double exponentiation:

to twotothethreetothe :n
local "cell0
make "cell0 1
for [i 1 :n] [make "cell0 3*:cell0]
local "cell1
make "cell1 1
for [i 1 :cell0] [make "cell1 2*:cell1]
output :cell1
end
print twotothethreetothe 2

This gives the correct value 512 as expected. It also works for 1, 3, 4, 5, and 6 -- two more values than BASIC can handle. Meanwhile, it still gives the wrong value for 0 -- and this time, it's even worse than BASIC, as Logo gives 512.

We notice that both Logo and BlooP use output for, well, the output. Also, we observe that Logo, like BlooP, has local variables, since we actually see the word local in the code! I decided to use the names cell0 and cell1 for the variables, though I could have given them other names.

In the for loop, we see the command make "cell0 3*:cell0, with different punctuation to mean "make (the new variable) cell0 equal to 3 times the (old value of) cell0." 

We move on to the next example, minus. Click "clear" in the lower-right corner and then enter in:

to subtract :m :n
if :m<:n [output 0]
for [i 0 :m] [if :i+:n = :m [output :i]]
end
print subtract 7 2

Notice that like BASIC, the Logo for loop is associated with a variable, in this case i, that keeps track of how many times we've gone through the loop. We don't use output until we're ready to output -- and it doesn't have a default value, so we must explicitly output 0.

I changed the name of this procedure to subtract. That's because minus is already primordial in Logo, although it refers to unary minus (to enter negative numbers). We can indicate primordial subtraction just as you'd expect, so print 7-2 outputs 5. Oh, and Logo coders don't use the word "primordial" -- the preferred word is primitive.

Our next example is the primes predicate. This time, let's follow Hofstadter and have prime invoke subtract -- to do this, we'll delete that last line print subtract 7 2 and insert the following text in its place:

to prime? :n
if :n<2 [output false]
if :n=2 [output true]
for [i 2 subtract :n 1] [if (remainder :n :i) = 0 [output false]]
output true
end
print prime? 7

All of the variables are local, so we don't need to worry about both subtract and prime containing the same parameters n and i. Fortunately, remainder is primordial/primitive in Logo, so we don't need to write it ourselves.

Meanwhile, the official Boolean values in Logo are true and false. These are equivalent to the numbers 1 and 0. (BASIC is the only language I know where true is -1 instead of 1.) Whenever we try to print such a value, it prints as 1 or 0 (so 1 means "prime," 0 means "composite"). But we have to write it this way in order to feed the Boolean output to the Goldbach procedure.

(This also marks a difference between Brian Harvey's "Berkeley Logo" and the Calormen interpreter linked above. In Berkeley Logo, "true and "false require quotes and are printed as the actual words "true" and "false.")

Speaking of which, let's write our Goldbach example:

to goldbach? :n
if :n<4 [output false]
for [i 2 :n] [if and prime? :i prime? subtract :n :i [output true]]
output false
end
print goldbach? 30

This looks almost like the BlooP example. The main difference is that in Logo, and occurs before the following two predicates, not in between as in BlooP or BASIC.

The Pascal Language

For most of my childhood, BASIC and Logo were the only languages I knew. BASIC was my main language at home, and Logo was the dominant language at school.

At some point during my junior or senior year in high school, the AP Computer Science exam, which had been based on Pascal, changed to C++. Although I never took AP Comp Sci, I did notice that the old Pascal texts were being given away at my school for free.

And so Pascal became the third language that I learned. But even more than with the languages I'd learn later on, I had no opportunity to write any Pascal code on an actual computer, so everything I know about Pascal is theoretical.

Even though Brian Harvey's website is mostly about Logo, he does have a Pascal page -- indeed, here he compares Pascal to his preferred language:

https://people.eecs.berkeley.edu/~bh/v3ch4/langd.html

Much of Hofstadter's BlooP reminds me of Pascal. Both languages have blocks that the user can both begin and end. Both languages have procedures -- except that the procedures that Hofstadter writes in BlooP are more accurately called functions in Pascal. Like algebraic functions, Pascal functions have an input and an output.

In both BlooP and Pascal, every line ends with a semicolon, and there's a period at the very end. But the Pascal period only comes at the end of the whole program -- each function ends with a semicolon.

Let's try our double exponentiation function in Pascal:

function twotothethreetothe(n:integer): integer;
  var cell0, cell1, i: integer;
  begin
    cell0 := 1;
    for i := 1 to n do
      cell0 := 3 * cell0;
    cell1 := 1;
    for i := 1 to cell0 do
      cell1 := 2 * cell1;
    twotothethreetothe := cell1
  end;

Now that I've written it out, we can see a few more differences between BlooP and Pascal. Yes, both languages have begin and end blocks, but if a for loop contains only a single statement, there's no need to begin and end the block. We also notice that instead of output like BlooP or Logo, the name of the function itself is assigned the output.

If we wanted to include this in a full Pascal program, it would look like this:

program calculate;

function twotothethreetothe(n:integer): integer;
  var cell0, cell1, i: integer;
  begin
    cell0 := 1;
    for i := 1 to n do
      cell0 := 3 * cell0;
    cell1 := 1;
    for i := 1 to cell0 do
      cell1 := 2 * cell1;
    twotothethreetothe := cell1
  end;

begin
  writeln(twotothethreetothe(2))
end.

And there's that period, because that's the end of the program. By the way, a Pascal program must go through a compiler before it is executed. The compiler converts the program into machine language, and if the program contains errors, it won't compile.

Here I just stuck to letting n be 2, because I'm not quite sure how high n can be. All variables here are considered to be of type integer, but I don't know how high integers can be. Other languages distinguish between "short" and "long" integer variables. For short variables, 2^(3^2) is the highest value this function can output, but for "long" variables, 2^(3^3) is the maximum. (The value 2^(3^4) that BASIC gave us is often called a "float" in other languages, while Logo's 2^(3^6) corresponds to the "double.")

Let's try out other BlooP programs in Pascal. We proceed with minus -- here, we'll only write the functions and avoid writing a driver program:

function minus(m,n:integer): integer;
  var i: integer;
  begin
    if m < n then
      minus := 0
    else
      for i := 0 to m do
        if i+n = m then
          minus := i
  end;

Unlike Logo's output, simply assigning to the name of the function doesn't end the function (that is, if I recall Pascal correctly). In this sense, Pascal is more like BlooP, where in Hofstadter's version of minus, he assigns to output without the procedure ending. Instead, he uses QUIT and ABORT -- which may or may not exist in Pascal. (I suspect one of them might exist, but I don't remember it.) I definitely know that there's nothing like AT MOST in Pascal.

For the first if, I was able to add an else section so that the for loop only executes when the condition m < n is false, which simulates the BlooP QUIT. But adding a second else to the inner if can't abort the loop the way we can in BlooP, so the loop continues running.

Pascal coders who want a loop to end early wouldn't use a for loop, which always runs a predetermined number of times (if it runs at all). Instead, they might use a while loop:

function minus(m,n:integer): integer;
  var i, diff: integer;
  begin
    i := 1;
    diff := 0;
    if m > n then
      while diff = 0 do
        if i+n = m then
          diff := i
        else
          i := i+1;
    minus := diff
  end;

This version avoids iterating the loop more times than necessary. The loop doesn't even start unless we have m > n, and it stops as soon as the result is assigned a nonzero value. (You might ask: what would happen if the answer of the subtraction really were zero? Well, that can't happen unless m = n -- and we don't even enter the loop unless m > n.)

Of course, like most languages, Pascal has a primitive subtraction operation, so this minus function isn't necessary.

Let's move on to prime?. This returns true or false, so we use Pascal's Boolean type. But since function names can't include question marks, it's traditional to start the name with the word "is":

function isprime(n:integer): boolean;
  var i: integer;
      flag: boolean;
  begin
    i := 2;
    flag := true;
    if n > 1 then
      while (i < n) and flag do
        if n mod i = 0 then
          flag := false
        else
          i := i+1
    else
      flag := false; {n is 0 or 1}
    isprime := flag
  end;

Remainder is primitive in Pascal, where it is called mod.

I forget whether a for loop can be executed 0 times in Pascal. (We know that it's executed at least once in BASIC and Logo.) But I definitely know that a while loop can be executed 0 times -- and this is exactly what happens when n is 2. The opening while fails as 2 is not less than 2 -- this means that isprime correctly remains true, so we no longer need to treat 2 as a special case.

Of course, 0 and 1 remain special cases. The final else handles 0 and 1 -- here I added a comment in braces (that's ignored by the Pascal compiler) to make it clearer to the readers.

Let's finish up with our Goldbach function:

function isgoldbach(n:integer): boolean;
  var i: integer;
      flag: boolean;
  begin
    i := 2;
    flag := false;
    while (i < n) and not flag do
      if isprime(i) and isprime(minus(n,i)) then
        flag := true
      else
        i := i+1;
    isgoldbach := flag
  end;

It's also possible to write this function using another type of loop -- a repeat loop, which always executes at least once, and keeps looping until its condition becomes true. (Keep that straight: a while loop keeps going while its condition is true, but a repeat loop keeps going while its condition is false.) Whenever I see the word while followed by not, it looks better as a repeat loop:

function isgoldbach(n:integer): boolean;
  var i: integer;
      flag: boolean;
  begin
    i := 2;
    flag := false;
    repeat
      if isprime(i) and isprime(minus(n,i)) then
        flag := true
      else
        i := i+1
    until (i >= n) or flag;
    isgoldbach := flag
  end;

Of course, since this repeat loop executes at least once, we must make sure that it doesn't fail when n is a tiny number. For example, when n is 2, the single loop checks whether 2 and 0 are prime. Since 0 isn't prime, the test fails and isgoldbach remains false, as it should. Thus in this program, the single iteration doesn't cause isgoldbach to return incorrect values. (This is different from my BASIC version, where I treated anything less than 4 as a special case in order to avoid 0 and 1.)

The TI-BASIC Language

At around the same time I learned about Pascal, I acquired my first TI graphing calculator, a TI-82. I noticed that the calculator had a PRGM (program) key, and so coding is possible here as well.

The language of the TI calculator is often called TI-BASIC. Despite this name, it has only some similarities with BASIC. Indeed, its lines aren't numbered, and it has while loops that BASIC lacks.

The main thing TI-BASIC does have in common with its namesake is that all variables are global. It's because of this limitation that our TI programs will have to resemble our BASIC programs as far as naming variables is concerned.

Let's start once again with double exponentiation:

PROGRAM:TWOTO3TO
:Input N
:1->A
:For(X,1,N)
:3A->A
:End
:1->B
:For(X,1,A)
:2B->B
:End
:B

This program works for N up to 5 -- 6 causes an overflow. Oh, and For loops can be executed 0 times on the TI, and so entering 0 for N gives the correct value 2.

Here is our minus program:

PROGRAM:MINUS
:Input M
:Input N
:If M<N
:Then
:Disp 0
:Stop
:End
:For(X,0,M)
:If X+N=M
:Then
:Disp X
:Stop
:End
:End

For this program, I represent BlooP's QUIT and ABORT by using Stop. But keep in mind that TI's Stop ends the entire program, not just a block or loop.

Meanwhile, TI's Then is a bit like Pascal's then begin. If only a single statement follows the If, then we don't need Then at all. For example, suppose I decided that whenever M<N nothing is printed at all, so the Disp 0 line isn't needed, only Stop. Then we could leave out both the Then line before it and the End line after it.

On the other hand, For always requires End even if only a single line needs to be looped. There are two Ends at the end of this program because both For and Then need an End.

Let's tackle the prime program next:

PROGRAM:ISPRIME
:Input N
:If N<2
:Then
:Disp 0
:Stop
:End
:For(X,2,N-1)
:If N/X=int(N/X)
:Then
:Disp 0
:Stop
:End
:End
:1

Since TI-BASIC doesn't have a Remainder primitive, we use Int just as we did in BASIC. And since TI's For loop can run 0 times, we don't need to treat 2 as a special case.

And now we try the trickiest one of all, the Goldbach program. Here is where TI is just as limited as BASIC was earlier -- all variables are global, and we can't have Goldbach call ISPRIME. (Actually, the TI does let us call a program from another program, but I won't show it here.)

So we'll do something similar to BASIC, where we just reuse some of the same lines from ISPRIME in the Goldbach program. But then there's another problem -- ISPRIME contains Stop lines, and those Stop lines would halt the entire Goldbach program when we'd just like to exit the current loop.

Fortunately, TI-BASIC has While and Repeat loops just like Pascal. And since our Pascal version of Goldbach used REPEAT, we'll use that in our TI version as well:

PROGRAM:GOLDBACH
:Input M
:If M<4
:Then
:Disp 0
:Stop
:End
:If M<7
:Then
:Disp 1
:Stop
:End
:2->Y
:Repeat (Y>M/2) or G
:1->G
:For(N,Y,M-Y,M-2Y)
:For(X,2,v(N))         (Note to readers: that v is actually a square root symbol!)
:If N/X=int(N/X)
:0->G
:End
:End
:Y+1->Y
:End
:G

In this program, I treat M less than 4 as a special case (just as we did in BASIC), so that we don't need to consider values of 0 or 1 in the primality part of the code.

The variable G contains 0 if no Goldbach partition has been found, and 1 if it has. The Repeat loop stops as soon as either G becomes 1 or Y exceeds M/2.

Inside the Repeat loop is a strange-looking For(N,Y,M-Y,M-2Y) loop. The reason for this loop is that for Goldbach, we need to test that both Y and M-Y are prime. Rather than copy the same code twice for both primality tests, we just use the For loop. The step size of this For loop is M-2Y, which is the distance from Y to M-Y. We only need a single variable G despite there being two primality tests -- if Y isn't prime, G is set to 0, and remains at 0 regardless of the primality of M-Y. So only if both primality tests pass can G remain at 1 and a successful partition be reported.

Since we're not printing the results of the primality tests, If contains only a single statement, and so it needs neither Then nor End. The first two Ends near the end of the program close the For loops, and the last End closes the Repeat.

But there are still two problems here. First of all, if Y isn't prime, then the TI still wastes time finding whether M-Y is prime anyway.

And more importantly, if Y=M/2, then Y equals M-Y and the step size for For is 0. This causes an immediate error -- the step size can't be zero, even if the start and stop values are equal.

To solve this problem, I decided to treat 4 and 6 as special cases as well, since their Goldbach partitions are 2+2 and 3+3 respectively. But this is admittedly an inelegant kludge -- it assumes that if an even number greater than 6 has a Goldbach partition, then it has one where the two addends are different (such as 10, which is 3+7 as well as 5+5), so that Repeat ends before reaching Y=M/2. (If M is odd, then M/2 isn't a whole number, so there's nothing to worry about.)

A better way to avoid this kludge is to replace For with another Repeat loop:

PROGRAM:GOLDBACH
:Input M
:If M<4
:Then
:Disp 0
:Stop
:End
:2->Y
:Repeat (Y>M/2) or G
:1->G
:Y->N
:Repeat (N=Y) or not(G)
:For(X,2,v(N))
:If N/X=int(N/X)
:0->G
:End
:M-N->N
:End
:Y+1->Y
:End
:G

Here's how the new Repeat loop works -- it starts out with N=Y. Since it's a Repeat loop, it runs at least once. At the end of the first loop, N has been replaced by M-N (that is, M-Y), and so the loop runs a second time. At the end of the second loop, N becomes M-N, which takes it back to Y. This satisfies the Repeat condition N=Y, and so there is no third loop.

This time, not only is there no error when Y=M/2, but as Y now equals M-Y, the Repeat condition is satisfied after the first loop, so we don't waste time testing its primality a second time. And the condition or not(G) is there to avoid the first problem above -- if Y is composite, then we don't waste time testing the primality of M-Y.

Of course, it's also possible to have programs call each other as subroutines. But we'd have to make a few changes to ISPRIME so that it won't keep printing all of the primality tests. The usual way to do this is with a helper program -- the new ISPRIME takes care of input and output only, while the helper program does the actual primality test. Then both ISPRIME and GOLDBACH can call the helper program for their respective primality tests. This would also eliminate the need for the special case M<4 in the Goldbach program.

By the way, some TI users might wonder why Repeat acts so strangely. Not only isn't it like Logo's repeat (which is followed by the number of times we want the loop to repeat), but for some reason, it loops when the condition N=Y is false -- and on the first loop N=Y is true, yet it loops anyway. Both of these occur because TI's Repeat is based on Pascal's, and that's how Pascal's loops work. (The use of the word until at the end of the loop makes Pascal's loop structure more obvious, but TI doesn't have until.)

C++ and Moving Forward with Java

When I arrived at UCLA, I took a PIC 10A class, which used the language C++. C++ is similar to Pascal, except that C++ is what's known as an object-oriented language. This means that most of the coding involves different types of objects that we create, and methods/functions that act upon those various objects. If I were to post C++ versions of the BlooP programs above, they wouldn't differ significantly from the Pascal versions, since the only objects we'd use are integers and Booleans.

An author named Jesse Liberty has posted a C++ text that's similar to Brian Harvey's Logo text:

http://101.lv/learn/C++/

I learned other languages at UCLA, including Scheme. (Scheme is related to Logo, as both of them are considered dialects of Lisp.) But I mastered none of those as well as C++. I described on the blog the time when I passed the written exam in Scheme, but had trouble writing code that actually made the computer work. I always had trouble keeping track of all the parentheses (since as they say, Lisp stands for "lists in silly parentheses").

Even in C++, I excelled in PIC 10A, but my grades slipped as I proceeded through 10B and 10C. I still recall one project (on text compression) in 10C that gave me so much trouble to compile (that is, I kept getting compile-time errors). In the end, I submitted a file that contained mostly stubs (short functions that do nothing, and are there for the sole purpose of getting it to compile). But somehow, my final grade for the project was 8/10 (whereas it would have been 0/10 if my work didn't compile at all). So I passed 10C, but passing a class isn't the same as being a programmer. I'm not really a programmer or coder unless I can get the computer to work correctly.

And so as I prepare to learn another computer language here on the blog, I want to make sure that this doesn't happen again. I want to learn a language for real -- not just read a text and answer questions, but write code that actually works on the computer. There's no playing around here -- I must master coding well enough to make it a viable job alternative if the coronavirus ultimately ruins my career in the classroom.

This is what I was hoping to do when I purchased that Visual Basic text in the fall. Visual Basic is similar to BASIC, but it removes some of its defects. Based on what I see when I glance through the text, Visual Basic has variables that are local to its subroutines. There are no line numbers, so that unlike original BASIC, Visual Basic isn't reliant on "spaghetti code" with lots of GOTO statements that jump around the program.

And I'd be able to write real Visual Basic code since the book came with a CD that contains software for V. Basic. No longer would my learning be merely theoretical -- it would be real hands-on coding.

But of course, you know what happened -- the CD got stuck in my drive. I can neither run the CD nor remove it from my drive. And so the whole Visual Basic project became a bust. Of course, this means that any computer language that I learn now must be theoretical, which is better than nothing (though I'd still prefer actual software that I can code). But it also means that I'm no longer tied to Visual Basic -- I could choose a better language instead.

I know that Visual Basic is superior to spaghetti BASIC -- the latter died out as soon as Visual Basic replaced it in the 1990's. But is Visual Basic a current, vibrant language? The last classic version, VB6, came out in 1998 -- there hasn't been another since. Thus it's likely that Visual Basic is now just as dead as spaghetti BASIC is.

We know that Pascal is also past its prime, since C++ replaced it on the AP Comp Sci test (which is why I was able to get that Pascal text for free). But as it turns out, C++ itself only lasted a few years as the official AP computer language. Since right around the time I left UCLA, the language of choice for the AP has been -- Java. (Indeed, both Pascal and Java have had much longer reigns as the official AP language than C++ ever did.)

Like C++, Java is an object-oriented language. There's a Java link at the same website as the C++ link from above -- Laura Lemay is listed as the primary author here:

http://101.lv/learn/Java/index.htm

And the latest version of Java came out just last month, Java 14. Between this and the fact that the AP considers it to be a language worth learning, I have made my choice. I will learn to code in Java, and I'll post my progress here on the blog.

Still, I wish I had some Java software to write actual code in. I miss the days when I could just turn on the computer and voila -- BASIC was right there, with no need to purchase or download any additional software. But those days are long gone. If I want to code in Java, I'll need to spend money to get it. Since I just received a stimulus check, it might be worth it to purchase and download.

But are there ways to access Java without spending lots of money? After all, if it's the language for the AP test, then how are high school students -- many of whom likely used on-campus computers as their only way to get to Java before the coronavirus outbreak -- accessing the language?

I found the following Twitter link:

https://twitter.com/hashtag/apcsa?lang=en

But unfortunately, there isn't much there on access to Java for people without it at home. (I do see a lot of Female Diversity Awards there -- it's good that more girls are considering programming as a career now.) I wouldn't be surprised if AP Comp Sci students are given passwords to Google Classroom or some other link to allow them to use Java during the coronavirus -- that's good for them, but it doesn't help people like me at all.

Let me write Hofstadter's BlooP functions in C++ now. Perhaps after I learn Java, I might try writing them in that language as well. Then again, these BlooP programs don't require object-oriented coding as found in C++ or Java.

double twotothethreetothe(int n)
{
     int cell0 = 1;
     for (int i=1; i<=n; i++)
          cell0 *= 3;
     double cell1 = 1.0;
     for (int i=1; i<=cell0; i++)
          cell1 *= 2.0;
     return cell1;
}

By using double here, we can go up to n equals 6, just as we did in Logo.

unsigned int minus(int m, int n)
{
     if (m<n)
          return 0;
     for (int i=0; i<=m; i++)
          if (i+n==m)
               return i;
     return 0;   // never reached when m >= n, but satisfies the compiler
}

Here we used unsigned int to emphasize that this minus function doesn't return negative values (as a real subtraction function would). All variables in BlooP are unsigned, and so the issue of sign never comes up. (Actually, I read that no variables in Java are unsigned, so I'd better get unsigned out of my mind before starting Java.)

bool isprime(int n)
{
     if (n<2)
          return false;
     for (int i=2; i<n; i++)
          if (n%i==0)
               return false;
     return true;
}

By now most of this should be self-explanatory. This function is bool (Boolean), the for loop can be executed 0 times so 2 isn't a special case, and % is the primitive remainder operator.

And finally, here's Goldbach:

bool isgoldbach(int n)
{
     for(int i=2; i<n; i++)
          if (isprime(i) && isprime(minus(n,i)))
               return true;
     return false;
}

Here the && operator denotes "and." Notice how much simpler our Goldbach function is in C++ than it was in TI-BASIC. I hope that Java will be as easy for me to figure out when I start it next week.

A Few Math Problems

I'm burying the Rapoport problem here, so clearly it's not a Geometry problem. Indeed, today's problem is Calculus. (So I guess today is AP day, as we jump from AP Comp Sci to AP Calc).

Today on her Daily Epsilon of Math 2020, Rebecca Rapoport writes:

the integral from 1 to 5 of (x^3 - 3x^2 - 3/2) dx

This is a straightforward integration problem. If you don't recall integration, then you can refer back to my Michael Starbird posts, particularly those on integration (Lessons 12-14, January 22nd-24th).

We begin by finding the antiderivative:

(1/4)x^4 - x^3 - (3/2)x

and then we plug in the limits of integration (5 and 1) and subtract:

(1/4)(5^4 - 1^4) - (5^3 - 1^3) - (3/2)(5 - 1)
= (1/4)(624) - 124 - 6
= 156 - 130
= 26

Therefore the value of the integral is 26 -- and of course, today's date is the 26th.

There was one Geometry problem on the Rapoport calendar recently:

Lines AB and CD are parallel. Find x.

[The two parallel lines are cut by a transversal, but no other points besides A, B, C, D are labeled. I will only label the eight angles -- the two lines are horizontal, and reading out the angles from left to right and top to bottom are 1, 2, 3, 4, 5, 6, 7, 8. Then Angle 2 = 155 and Angle 5 = x.]

OK, so this is easy. By the Corresponding Angles Consequence, Angle 6 = Angle 2 = 155. Then, since Angles 5 and 6 form a Linear Pair, x + 155 = 180, and so x = 25. Therefore the desired angle is 25 degrees -- and of course, yesterday's date was the 25th.

The day before yesterday was Arbor Day (or Tree Day). Arbor Day is the last Friday in April, and so this year it fell on its earliest possible date, April 24th. Thus this is the closest to Earth Day (a similar holiday) that Arbor Day can get.

(Also, notice that if Easter is on its latest possible date, April 25th, then Arbor Day is also on its latest possible date, April 30th. Thus Arbor Day can never be before Easter -- the earliest it can fall relative to the holiday is Easter Friday/Friday of Bright Week. Also, notice that Arbor Day is always one month and one day before Memorial Day. Thus Arbor Day, like Memorial Day, can keep its current definition on the Usher Calendar.)

The reason I'm writing so much about Arbor Day is that Rapoport -- most likely intentionally -- put a tree problem on her calendar that day. Of course, these aren't botanical trees, but the trees that one can find in graph theory. (The Konigsberg Bridge problem of Lesson 1-4 is also on graph theory.)

I won't state the problem here because I don't wish to draw or describe the graph here. Instead, I will mention that there is a Numberphile video on trees, including mention of the number TREE(3), which is a very big number. (Trees can also appear as objects in C++ and presumably Java.)

What About FlooP and GlooP?

According to Hofstadter, FlooP is similar to BlooP, except that there are MU-LOOPs that, unlike the For loops of most languages, can run an arbitrary number of times. He writes:

"BlooP's defining feature was the boundedness of its loops. What if we drop that requirement on loops, and invent a second language, called "FlooP" ('F' for "free")? FlooP will be identical to BlooP except in one respect: we may have loops without ceilings, as well as loops with ceilings (although the only reason one would include a ceiling when writing a loop-statement in FlooP would be for the sake of elegance)."

The author explains why we'd want to do this:

"This feature will allow us to write tests in FlooP for such properties as wondrousness [that is, a number that satisfies the Collatz conjecture] and the Tortoise property [a number that is the difference of two primes] -- tests which we did not know how to program in BlooP because of the potential open-endedness of the searches involved."

Keep in mind the original reason that Hofstadter writes about this -- he's ultimately trying to tie FlooP back to Godel and his incompleteness theorem:

"So each FlooP program now gets a very long Godel number. For instance, the shortest BlooP function (which is also a terminating FlooP program) --"

OK, there's no need for me to start writing out all the Godel numbers.

These new loops are easily simulated in other languages as while loops with no real condition -- such as while(true) -- which don't halt until a return is reached inside the loop.

As for GlooP, this should be one step beyond FlooP:

"Yes, but what is GlooP? If FlooP is BlooP unchained, then GlooP must be FlooP unchained. But how can you take the chains off twice? How do you make a language whose power transcends that of FlooP?"

But according to Hofstadter, there can be no improvement over FlooP -- that's as far as we can go:

"It is called the Church-Turing Thesis. If we accept the CT-Thesis, we have to conclude that "GlooP" is a myth -- there are no restrictions to remove on FlooP, no ways to increase its power by 'unshackling' it, as we did BlooP."

And just as "primitive recursive" means "BlooP-computable," the author explains:

"Now FlooP-computable functions can be divided into two realms (1) those which are computable by terminating FlooP programs: these are said to be general recursive; and (2) those which are computable only by nonterminating FlooP programs: these are said to be partial recursive."

Hofstadter concludes the chapter by tying this back to TNT, or Typographical Number Theory, which can handle both primitive and general recursive functions and predicates:

"If TNT could not represent primitive or general recursive predicates, then it would be incomplete in an uninteresting way -- so we might as well assume it can, and then show that it is incomplete in an interesting way."

Conclusion

Had my Visual Basic CD not gotten stuck in my drive, my original plan was to get through the early lessons quickly and slow down for the later, more complicated lessons. I was going to spend one post on Lesson 1, two posts on Lesson 2, three posts on Lesson 3, and so on. If we started this right after Presidents' Day (which was my plan), then we would have completed Lesson 6 on Pi Day.

The Java link above has 28 lessons. If we spend one post per lesson, then at my current (summer) posting rate, we should finish right around the first day of school. But I want to go slower to make sure that I'm actually learning. Then again, spending n posts on Lesson n is too slow when I'm not posting every weekday -- we'd probably only get through Lesson 7 by the end of summer.

So instead, I'll spend one post on Lesson 1 (which is just an introduction), and then two posts for every lesson from that point on. This should take us about midway through the book before the first day of school.

And so our first Java lesson will be in my next post.
