Tuesday, April 28, 2020

Lemay Lesson 1: An Introduction to Java Programming

Table of Contents

1. Introduction
2. Lesson 1 -- An Introduction to Java Programming
3. Our First Java Application and Applet
4. An Interesting Rapoport Problem
5. Reblogging for Today
6. Conclusion
7. EDIT: April 29th Update

Introduction

This is my first post for my big spring/summer project -- learning to code in the Java language. As I explained in my previous post, I'm learning Java as we go along. Everything that I'm posting here is something I'm seeing and learning for the very first time.

Here is a link to the website where I'm learning about Java:

http://101.lv/learn/Java/ch1.htm

Notice that Laura Lemay, the primary author of this website, calls her lessons "Days." The full title of her book is Teach Yourself Java in 21 Days! Thus she's expecting us to read one chapter of her book per day and become Java experts after three weeks. Oh, and there are actually 28 chapters in the book, since there's also a bonus fourth week of lessons.

Well, I could follow the suggested timetable, since I do wish to learn Java quickly. Indeed, I could wait until Friday to start so that the chapters match the dates in May, and then I'd be done with the book by May 21st or 28th. And I can even post my progress here every day -- so it would be almost like that "post 30 times in May" challenge from four years ago.

But I won't do that for two reasons. First, I don't wish to post that often, especially when I'm posting on a summer schedule of once or twice per week. And second, I want to move slowly to make sure that I'm actually learning the language rather than rushing through the book just to meet an arbitrary 21- or 28-day deadline.

Therefore, I'm posting these as "Lessons," not "Days." I prefer "Lessons" to "Chapters" to emphasize that I'm trying to learn something here. Today we'll work on Lesson 1. Since this lesson is short, I'll spend only one blog post on it, but most lessons will take two posts to complete.

By the way, if there are any AP Comp Sci teachers or students out there looking for something to read to review Java before the upcoming AP exam, please feel free to use this blog post as a resource, since I'm learning Java along with you. But keep in mind that I'm moving very slowly, so I won't be anywhere near done with the text by the day of the exam.

Meanwhile, today there is a 4 PM board meeting in my new district regarding student grades due to the coronavirus closure. (My old district announced its grading policy two and a half weeks ago.)

Even though that meeting should be complete by the time I click "Publish" on this blog post, there's no way for me to know what decisions have been made in time for me to mention it here. Most likely, the grading policy will be emailed and posted on the district website early tomorrow morning -- but that's too late to make today's blog post.

Perhaps I should have waited until tomorrow to make this post. But I'd already planned on posting today, and I don't wish to delay my study of Java an extra day just so I can post about grades. Still, I am curious about what sorts of grades the students I subbed for will be earning.

Oh, and before we start Java, I want to mention that classic Google Doodles are returning to help entertain us during the virus closures. Yesterday's was the first, and it was on coding! It looks like making a bunny hop, but it's really a thinly disguised version of Logo turtle graphics. (It even mentioned "Logo 2017" in the URL!)

Lesson 1 -- An Introduction to Java Programming

Since this text is written online, there's no need for me to quote large portions of the text (as opposed to our usual side-along reading books). If you want to follow along, you can just click the link to see what I'm talking about.

Lesson 1 of Laura Lemay's Teach Yourself Java in 21 Days! is called "An Introduction to Java Programming," and here's how it begins:

Hello and welcome to Teach Yourself Java in 21 Days! Starting today and for the next few weeks you'll learn all about the Java language and how to use it to create programs that run inside Web pages (called applets) and programs that can run on their own (called applications).

That's the overall goal for the next couple weeks. Today, the goals are somewhat more modest, and you'll learn about the following:
  • What exactly Java is, and its current status
  • Why you should learn Java -- its various features and advantages over other programming languages
  • Getting started programming in Java -- what you'll need in terms of software and background, as well as some basic terminology
  • How to create your first Java programs -- to close this day, you'll create both a simple Java application and a simple Java applet!

Once again, I'm only quoting major lines here. Let's begin with what exactly Java is:

More specifically, Java is an object-oriented programming language developed by Sun Microsystems, a company best known for its high-end UNIX workstations. Modeled after C++, the Java language was designed to be small, simple, and portable across platforms and operating systems, both at the source and at the binary level, which means that Java programs (applets and applications) can run on any machine that has the Java virtual machine installed (you'll learn more about this later).

As I mentioned in my last post, I'm familiar with C++ since I learned it at UCLA. If Java is indeed modeled after C++, then this should help me learn it.

Here's the most important part for me -- what to do about Java software:

Currently, to program in Java, you'll need a Java development environment of some sort for your platform. Sun's JDK works just fine for this purpose and includes tools for compiling and testing Java applets and applications. In addition, a wide variety of excellent Java development environments have been developed, including Sun's own Java Workshop, Symantec's Café, Microsoft's Visual J++ (which is indeed a Java tool, despite its name), and Natural Intelligence's Roaster, with more development tools appearing all the time.

This website is clearly old, as it continually refers to Netscape as a Java-enabled browser to view Java applets in. Still, the important thing is that I need a Java development environment.

Here I learn that compiling Java code is different from compiling C++ code:

Things are different when you write code in Java. The Java development environment actually has two parts: a Java compiler and a Java interpreter. The Java compiler takes your Java program and, instead of generating machine codes from your source files, it generates bytecodes. Bytecodes are instructions that look a lot like machine code, but are not specific to any one processor.

And as for the interpreters, I read:

In the future, however, the Java bytecode interpreter will most likely come with every new operating system-buy a Windows machine, and you'll get Java for free.

Well, this is definitely the future (nearly a quarter-century later), and I have a Windows machine. But I definitely do not have a Java interpreter on my computer. This must have been just wishful thinking on Lemay's part.

Meanwhile, here's a comparison between Java and the language I'm familiar with, C++:

Java is modeled after C and C++, and much of the syntax and object-oriented structure is borrowed from the latter. If you are familiar with C++, learning Java will be particularly easy for you because you have most of the foundation already. (In fact, you may find yourself skipping through the first week of this book fairly rapidly. Go ahead; I won't mind.)

Note: I'm certainly not skipping through the first week rapidly. I want to make sure that I'm learning Java thoroughly.

At this point, the author tells us how to install Java on our system. I will skip over this part, since this won't matter until I actually decide to download it. But note -- this might be sooner rather than later, since I do wish to get down to writing some actual code.

Our First Java Application and Applet

In this lesson, there are two programs -- one an application, the other an applet. Let me post both of those right here.

First is the application:

Listing 1.1. Your first Java application.
1: class HelloWorld {
2:     public static void main (String args[]) {
3:         System.out.println("Hello World!");
4:     }
5: }

Lemay warns us that the line numbers aren't part of the program -- they're included so that she can refer to the lines more easily in the book. This isn't BASIC where every line needs to be numbered.

It is traditional for coding texts to begin with the "Hello World" program. Indeed, I remember subbing in a C++ coding class one day, and sure enough, the students were working on their own "Hello World" program.

Lemay doesn't explain how exactly this program works -- most likely, she'll tell us all about the individual words and commands in subsequent chapters. So let me try to figure out some of the words myself, based on what I already know about C++ and other programming languages.

In Line 2, the most important word is main. Every complete program in C++ has a main function, and it appears that the same is true for Java applications. The main function in C++ and Java corresponds to the program in Pascal.

The word void appears just before main. OK, there are void functions in C++ as well. It means that this function does not return a value (it might print something, but that's not the same as the return value of the function). All procedures in BlooP return a value, but this isn't true for most other coding languages. In fact, a Pascal procedure doesn't return a value, while a function does.

But in our Java program, there are two more words before main -- public and static. I've seen these words in C++ before, but in more complex programs, not the simple "Hello World" program.

Oh, and there's one more word in Line 2 to analyze -- String args[]. Actually, there's something similar in C++, called argc and argv. It refers to parameters that we would put on the command line when starting the program, in case we need to give main some extra information. In C++, the "Hello World" program doesn't need argc or argv, but I assume that in Java, we have to mention the args parameter whether we need it or not.

That takes care of Line 2. Line 3 is obviously where we actually print "Hello World!" I notice that the command that actually does the printing is println. This reminds me of print in both BASIC and Logo. And those extra two letters ln appear in Pascal, where we see writeln. Those two letters stand for line, so I assume that in Java, println prints the message on its own line -- without ln, the next message (if there is one) would print on the same line.
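
If my guess about println is right, then a tiny variation on Lemay's program should behave as follows. (This is my own unverified sketch -- I haven't installed Java yet, so I can't actually compile it.)

class PrintDemo {
    public static void main (String args[]) {
        System.out.print("Hello ");       // no ln: the next message stays on this line
        System.out.println("World!");     // ln: finish the line here
        System.out.println("Goodbye!");   // so this one should appear on a new line
    }
}

If I'm reading this correctly, it should print "Hello World!" on one line and "Goodbye!" on the next.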

Before println, I see the word out. This reminds me of cout in C++ -- it refers to the output "stream" that actually prints the message. I also see the word System there, which has no equivalent in C++ or any other language I know. I suppose this means that it's the "system" that's sending the message to the output stream to be printed.

Oh, and there's one more thing I notice about this short program. HelloWorld isn't the name of the program, but the name of a class in Line 1. In C++, classes refer to objects -- indeed, that's where the name "object-oriented programming" comes from. But the "Hello World" program in C++ doesn't require any classes -- and in C++, main isn't the member of any class. Apparently in Java, even the main function must be the member of a class. Oh, and that also explains why main is public -- in C++, only members of classes are public. Since main is a member of a class in Java, we're able to declare it public.
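
Oh, and from skimming ahead in the lesson, it appears that compiling and running this application with Sun's JDK comes down to two commands -- though of course I can't verify this on my own machine until I actually download the JDK:

javac HelloWorld.java     (the compiler turns the source file into bytecodes in HelloWorld.class)
java HelloWorld           (the interpreter runs those bytecodes and prints Hello World!)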

Now let's look at the first applet that Lemay writes for us:


Listing 1.2. The Hello World applet.

1: import java.awt.Graphics;
2: 
3: public class HelloWorldApplet extends java.applet.Applet {
4: 
5:     public void paint(Graphics g) {
6:        g.drawString("Hello world!", 5, 25);
7:     }
8: }

In Line 3, we see that HelloWorldApplet is the name. I see  java.applet.Applet at the end of this line, which indicates that it's a Java applet. Unlike the previous class, this class is itself declared as public -- probably because applets appear on the Internet, so of course we expect the "public" to be able to see it.

In Line 5, this class has one member function, but I notice that it isn't main. This likely means that unlike applications, applets don't need to have a main function. The function is called paint, and it has one parameter, a Graphics object. So this applet has something to do with graphics.

In Line 6, the function drawString (a member of Graphics) is called. This makes sense, as "graphics" means "drawing something on a screen." The object to be drawn is a string -- likely the string "Hello world!" that's listed as the first parameter. I'm not sure what the other parameters, 5 and 25, are for. Well, drawings have sizes, so maybe 5 and 25 are the length and width (or other dimensions) of our drawing. Or maybe they stand for color numbers, since drawings have colors (but I can't be sure, since the link on the Lemay page to what the applet looks like is in black and white).

That leaves us with Line 1, and for this I must think back to C++ again. To import something means to bring it in, or "include" it in our program. C++ programs often begin with #include, as in the line #include <iostream.h>. The file being included is java.awt.Graphics -- I don't know what awt is, but I know what java and Graphics are. Thus by importing this file, we're saying that this applet will include Java graphics.

And that completes our first lesson. Some readers might wonder, is that it? What's the point of breaking down the two Hello World programs, when it would be easier for the readers just to click on Lemay's website and learn Java directly from there, leaving me out? And if Lemay doesn't explain what all the parts of the program mean, it's because she doesn't want us to know yet. She'll explain all the parts of the program in due time.

Well, here I'm the one who's learning Java. And since I haven't installed Java yet, the one thing I can do to help me learn Java is compare it to C++ and the other languages I know. This is all part of my personal learning process.

No, I don't need to post a learning process on the blog. But blogging all of this helps keep me accountable to myself -- if I say I'm going to blog something, then I have to blog it, or otherwise there are potential readers out there who'll wonder why I'm not blogging it. If you don't wish to read about my Java learning process, then you don't have to read it. You can skip all the Java sections of my posts and read the rest.

But posting my own process of learning something new, such as Java, also illustrates to me what a learning process looks like. If I struggle to learn Java, it will help me appreciate what happens when my students struggle to learn math in my math classes. Recall that Java is still just Plan B -- Plan A is to return to the classroom as a math teacher. If Plan A succeeds, I don't need a Plan B -- but writing out the steps to Plan B (my learning process in Java) can ultimately help me in Plan A (the kids' learning process in math).

An Interesting Rapoport Problem

I'm burying the Rapoport problem here, so clearly it's not a Geometry problem.

Today on her Daily Epsilon of Math 2020, Rebecca Rapoport writes:

Let [e_0; e_1, e_2, ...] be the infinite simplified continued fraction representation of the mathematical constant e. What is e_41?

I've written about continued fractions on the blog before, going back to the Ogilvy book. Let me reblog a little of what I wrote about them in my final 2018 post:

At this point, Ogilvy writes about the main purpose of continued fractions. It's not to find fractions to approximate other fractions, but rather to find fractions to approximate irrational numbers.

His first example is sqrt(2). He begins by adding 1 -- that is, floor(sqrt(2)) -- and then subtracting it back:

sqrt(2) = 1 + sqrt(2) - 1

We had good luck inverting before, so we try it again:

sqrt(2) = 1 + 1/(1/(sqrt(2) - 1))

Now we perform the Algebra II trick -- rationalize the last denominator by multiplying it and the numerator by its conjugate:

sqrt(2) = 1 + 1/((sqrt(2) + 1)/((sqrt(2) - 1)(sqrt(2) + 1)))
            = 1 + 1/((sqrt(2) + 1)/(2 - 1))
            = 1 + 1/(1 + sqrt(2))

At this point, we have that sqrt(2) equals something on the right hand side -- and that something itself contains a sqrt(2). So we substitute the entire RHS in for sqrt(2):

sqrt(2) = 1 + 1/(1 + (1 + 1/(1 + sqrt(2))))
            = 1 + 1/(2 + 1/(1 + sqrt(2)))

And the RHS still has a sqrt(2), so we substitute in the RHS again and again ad infinitum:

sqrt(2) = 1 + 1/(2 + 1/(2 + 1/(2 + 1/(2 + ...

Just as rational numbers have finite continued fraction expansions, it turns out that irrational numbers have infinite continued fraction expansions.

Returning to 2020, I notice that Ogilvy discusses continued fractions for several famous irrational numbers, including e. But instead of just looking it up, let's generate it on the TI calculator. We begin by entering the number -- let's just try sqrt(2) again to be sure that it works. Since the integer part is 1, we already know to start the continued fraction with a 1:

1.414213562 [1;

Now we type in the following sequence:

fPart(Ans)^-1

and keep pressing the ENTER key over and over. Each time we get an answer, we append its integer part to our continued fraction:

2.414213562 [1; 2,
2.414213562 [1; 2, 2,
2.414213562 [1; 2, 2, 2,
2.414213562 [1; 2, 2, 2, 2,

and so on. So our continued fraction is [1; 2, 2, 2, 2, ...] -- which is exactly what we obtained earlier.
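
Since Java is the whole point of this spring project, let me also sketch this same keep-inverting-the-fractional-part procedure as a Java application. (This is my own untested attempt using the standard Math class -- it comes from neither Lemay nor Rapoport, and I haven't compiled it.)

class ContinuedFraction {
    public static void main (String args[]) {
        double x = Math.sqrt(2.0);              // try Math.E or Math.PI here instead
        System.out.print("[" + (long) Math.floor(x) + ";");
        for (int i = 1; i <= 10; i++) {
            x = 1.0 / (x - Math.floor(x));      // the same step as fPart(Ans)^-1 on the TI
            System.out.print(" " + (long) Math.floor(x) + ",");
        }
        System.out.println(" ...");
    }
}

If it works the way I expect, it should print [1; 2, 2, 2, ... just as the calculator does.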

The sequence won't always become constant as it did above, but many irrational numbers do produce certain patterns in their continued fractions. Let's try sqrt(3) and pi first, before finally tackling e:

sqrt(3)
1.732050808 [1;
1.366025404 [1; 1,
2.732050808 [1; 1, 2,
1.366025404 [1; 1, 2, 1,
2.732050808 [1; 1, 2, 1, 2,
1.366025404 [1; 1, 2, 1, 2, 1,
2.732050808 [1; 1, 2, 1, 2, 1, 2,
1.366025404 [1; 1, 2, 1, 2, 1, 2, 1,
2.732050808 [1; 1, 2, 1, 2, 1, 2, 1, 2,

And so sqrt(3) = [1; 1, 2, 1, 2, 1, 2, 1, 2, ...]. Since our continued fractions are based on fractional parts (fPart), as soon as two numbers in the sequence have the same fractional part, the sequence must completely repeat.

pi
3.141592654 [3;
7.062513306 [3; 7,
15.99659441 [3; 7, 15,
1.003417231 [3; 7, 15, 1,
292.6345986 [3; 7, 15, 1, 292,
1.575799191 [3; 7, 15, 1, 292, 1,
1.736716576 [3; 7, 15, 1, 292, 1, 1,
1.3573741     [3; 7, 15, 1, 292, 1, 1, 1,
2.798188227 [3; 7, 15, 1, 292, 1, 1, 1, 2,

There's no obvious pattern here, especially since that 292 sticks out like a sore thumb. We're hoping that e will produce something promising:

e
2.718281828 [2;
1.392211191 [2; 1,
2.549646778 [2; 1, 2,
1.819350244 [2; 1, 2, 1,
1.220479286 [2; 1, 2, 1, 1,
4.535573476 [2; 1, 2, 1, 1, 4,
1.867157439 [2; 1, 2, 1, 1, 4, 1,
1.153193129 [2; 1, 2, 1, 1, 4, 1, 1,
6.527707921 [2; 1, 2, 1, 1, 4, 1, 1, 6,
1.894987663 [2; 1, 2, 1, 1, 4, 1, 1, 6, 1,
1.117333838 [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1,
8.522690623 [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8,
1.913177616 [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1,
1.095077214 [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1,
10.51776712 [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1, 10,
1.931370227 [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1, 10, 1,
1.073686888 [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1, 10, 1, 1,

And there we have it, a clear pattern. Excluding the initial e_0 = 2, all of the coefficients are 1 except for every third value:

e_2 = 2
e_5 = 4
e_8 = 6
e_11 = 8
e_14 = 10

We see that each of these indices is one less than a multiple of 3 -- if we add that one back and then take 2/3 of it, the answer is the correct value in the continued fraction.

Adding one to 41 gives 42, and then two-thirds of 42 is 28. Therefore e_41 = 28 -- and of course, today's date is the 28th.

Reblogging for Today


Each day during the coronavirus closure, I've been reblogging an old post based on the date. And since today is April 28th, I'll go back to an old post from that date.

Four years ago today, I was officially hired to work at the old charter school. The letter I received notifying me of this was dated April 28th, 2016. But I didn't check my email until after I completed my April 28th post, so I didn't mention my new job on the blog until my April 29th post that year.

The last time that April 28th wasn't on the weekend (and hence was a posting day) was three years ago, just after I left the old charter school. That day, I posted an old trig lesson that I'd found from another teacher, Micaela AKA "Alternative Math":

Today's idea comes from an anonymous Washington State teacher who only goes by the username "Alternative Math" -- named for the alternative high school to which this teacher is assigned.

https://alternativemath.wordpress.com/2016/02/05/geometry-constraints-and-trig/

Here is the original post:

I have been busy planning out a unit on trigonometric ratios for my Geometry B course. I have been trying to balance the open ended exploration and project based learning that I prefer with the more typical questions that students will eventually see on state tests or future math classes.
Here is the [Common Core] standard I’m addressing with this lesson: G-MG.3 Apply geometric methods to solve design problems (with a focus on constraints).
I introduce trig with the slope ratio, proportions, and physically measuring before I ever tell them the word tangent. I’m leaning toward using [High Priestess] Kate Nowak’s Introduction to Trig and then running a few Labs where calculate heights and distances of physical things outside before offering this [worksheet].
Afterwards I might show a few ramp fails before giving them a more open ended design problem. I’m still working on the actual formatting piece, but it will be a blueprint showing a door/stoop 5 ft high, but due to size of parking lot also has a restriction on length. Students will figure out it is not possible to use one ramp in that space and will have to figure out how to use two or more ramps to fit the constraints.
Nothing too mind blowing or exciting here, but I figure it gets at what I’m hoping they understand.

Notice that this teacher attributes this activity to yet another teacher -- New Yorker Kate Nowak. Even though I myself found this activity on the Alternative Math page, in today's post I will credit Kate Nowak as the originator of the idea. This is due to the anonymity of the Alternative Math website -- it's far easier for me to write "Nowak" than "the author of Alt Math," and it's easier to write "she" (referring to Nowak) than to write "he or she" over and over. We already know who Nowak is -- I mentioned her blog that same week and explained why she's known as the "High Priestess."

This is what Nowak writes about this activity on her own website:

The children understand that sin, cos, and tan are side ratios. The children! They understand! They are not making ridiculous mistakes, and they can answer deeper understanding questions like, "Explain why sin(11) = cos(79)." I think right triangle trig is a frequent victim of the "First ya do this, then ya do this" treatment -- where kids can solve problems but have no idea what is going on. There's often not a ton of time for it, and it responds well to memorized procedures (in the short term). So, if your Day One of right triangle trig involves defining sine, cosine, and tangent, read on! I have a better way, and it doesn't take any longer.

I don't need to keep posting the rest of this project. The main idea from both Micaela and Nowak is that students are often turned off when they hear words like "trigonometry," "cosine," and so on -- and after the unit is over, they forget what trig is, and think of "cosine" as a calculator button and nothing more. The project, therefore, is to get students thinking about the concepts without getting hung up with the vocabulary. How many non-mathematicians, decades after they took trig, remember even that trig has something to do with triangles?

This is the intent of my "threenooklore" (or is it "threesidelore"?) project that I announced on the blog a few posts back. It seeks to replace complicated words like "trig" (or "trigonometry") with hopefully simpler words that help students remember the concepts. It goes hand-in-hand with Nowak/Micaela's triangle activity -- and I'll start this project at some point this summer.

Conclusion

Today, California Governor Gavin Newsom mentioned the possibility that the school year might start in July, if the coronavirus is sufficiently under control by then. This would allow students to make up some of what they missed during the virus closure.

Normally, as a sub, I wouldn't want to be anywhere near a classroom in July (unless it's as a summer school teacher, as I tried to sign up for two years ago). But when that stimulus money starts to run out and I have an empty bank account in July, I'll be itching to return to the classroom then.

I'm still deciding what my blogging schedule will look like if schools in either of my two districts open in July. For the most part, I'll cross that bridge when I get to it. I will start some of my summer projects (threenooklore) a little earlier if I hear that either of my districts plans on reopening early. (It will have meant that "summer break" is running from April-June this year.)

Of more immediate concern is, will I purchase and download any software with the money I earned in the stimulus? I'm considering both Zoom (for a possible teaching career) and Java (for a possible non-teaching career), and I might make a decision in time for my next post.

EDIT: April 29th Update

No, the main reason I'm editing this post on April 29th isn't that I wanted to post the grading system in my new district -- though I will post it anyway. In my new district, elementary students will not get letter grades at all. In middle and high schools, the grades will be given as "credit/incomplete," but students who want A's, B's, or C's can still opt in to letter grades. (Does this mean that D grades now count as incomplete?) Grades can be no lower than what they were at the time the schools closed.

But actually, I edit this not to write about A's, B's, and C's, but about C's, D's, E's, and G's. Yes, that's right -- I already mentioned the return of the Google Doodle for coding in Logo, and today the Fischinger player is back!

Unfortunately, there's a problem -- the Fischinger player is running very slowly on my computer, even when compared to the other Doodles this week. My computer is about four years old, which is considered over the hill by computer standards. I already mentioned how the Visual Basic CD failed to run on my computer, and now I'm having trouble with the Fischinger player.

I did create a quick new song for the Fischinger player before it stopped working. Recall that the lowest four rows on the player, down to the bottom row, are g', c', d', and e'. Enter the following melody:

Melody: d'-d'd'd'e'g'c'g'-c'c'g'g'd'e' (a dash - means skip a column)

To this, I add a walking bass line. I generated the melody at random, but chose bass notes that sound good with it. Our walking bass line will use the top row (C) and fourth row (G), and will use only every fourth column, starting with the first column:

Bass: G---G---C---C---

I'm barely able to enter these notes and have the song repeat a few times before my computer grinds to a halt.

With my old computer struggling to play the Fischinger songs, this is discouraging me from trying to download more complex programs such as Zoom or Java, as I mentioned earlier. I'll still wait until my next post to discuss my software plans in full.

Sunday, April 26, 2020

Introduction to Coding

Table of Contents

1. Introduction
2. Reblogging -- and Finishing -- BlooP and FlooP and GlooP
3. Mocha & BASIC
4. The Logo Language
5. The Pascal Language
6. The TI-BASIC Language
7. C++ and Moving Forward with Java
8. A Few Math Problems
9. What About FlooP and GlooP?
10. Conclusion

Introduction

This is my 1100th post. And it is one of my most important posts to date, because this post will determine the future -- not just of my blog, but of my career.

As I mentioned before, as the school closures grow longer, I'm very concerned with what effect the coronavirus will have on education. My biggest fear is that many students and parents will choose to avoid attending school even in the fall. They might decide that the future of education is online. And if that happens, there'll be no need for subs like me -- and less need even for regular teachers. All that will be needed for education are computers.

On Friday, my stimulus check from the federal government arrived. But, as we all know, this will provide me with only temporary relief. If the schools don't reopen in the fall because parents and students are still worried about the virus, I'm going to need another source of income.

And that's what this post is all about. I'm seriously considering a career change. After all, teaching may be a thing of the past, while computers are definitely a thing of the future. And if I want to be hired as a coder, I need to raise my coding skills.

Keep in mind that this is a Geometry blog, not a coding blog. Thus if at any point I'm hired for a computer (or any other non-teaching) job full-time, then that's the end of this blog. This blog might not make it to a 1200th post.

But until that actually happens, I'll keep blogging. As long as it's theoretically possible that I'll see the inside of a classroom again, this blog will remain.

In the meantime, I'll blog about my journey towards a possible career in computers. I believe that simply writing about it makes me more likely to remain focused on it. By blogging, there's someone out there who's holding me accountable -- you readers of this blog. That way, I'll keep focused as I gradually work towards my goal. And who knows -- maybe some of you readers are also teachers who are considering a new career in the wake of the virus outbreak.

Today's post is an introduction to the world of coding. It's all about exactly what computer language I will choose to learn independently -- which one is most likely to get me hired.

Reblogging -- and Finishing -- BlooP and FlooP and GlooP

It seems as if at least part of every post these days includes reblogging previous posts. Actually, this was always true even before the virus shutdown -- nearly every Geometry lesson is the same as that from the previous year, and so I'd usually reblog it. But now that schools are closed, I've been reblogging the non-Geometry parts of my old posts.

It turns out that one year ago today we were still in the middle of our side-along reading of Hofstadter's book. And it just so happens that the chapter for April 26th was on the three computer languages that Hofstadter invented -- BlooP and FlooP and GlooP.

Hey, that fits today's post, which is all about computer languages, like a glove! So why don't we go back and reblog that particular chapter?

Indeed, last year I wrote very little about the chapter since I'd spent too much time on the previous "Dialogue" (which was all about the Collatz conjecture). So this is my opportunity, at long last, to complete that chapter. (Therefore very little of this post will actually be reblogging.)

And in doing so, I'll introduce you to computer languages (for those of you not familiar). Once again, BlooP and FlooP and GlooP are not actual computer languages -- no business is going to hire me to become a FlooP coder. But at least I'll be able to compare the Hofstadter languages to real languages that work on real computers.

Without further ado, let's begin. This is what I wrote last year about this chapter:

Chapter 13 of Douglas Hofstadter's Godel, Escher, Bach is called "BlooP and FlooP and GlooP." It begins as follows:

"BlooP, FlooP, and GlooP are not trolls, talking ducks, or the sounds made by a sinking ship -- they are three computer languages, each one with its own special purpose. These languages were invented specially for this Chapter."

I'm sorry, but it just feels so weird to be reading this book again a year later. I've forgotten that Hofstadter regularly mentions two characters -- Achilles and the Tortoise -- from Zeno's paradox:

"The Tortoise can only make a record called 'I Cannot Be Played on Record Player X' provided that Record Player X is really a record player!"

Now I remember -- the crafty Tortoise was always trying to break record players! Meanwhile, the author is trying to teach his readers about formal systems:

"We saw at an early stage that formal systems can be difficult and unruly beasts because they have lengthening and shortening rules, which can possibly lead to never-ending searches among strings."

Then Hofstadter suddenly quotes another Dialogue, from JM Jauch. Here one character, Salviati, is trying to explain to Sagredo that the series 1 - 1/3 + 1/5 - 1/7 + 1/9 - ... converges to pi/4 = 0.78539...:

Sagredo: This is a marvelous idea! It suggests that when we try to understand nature, we should look at the phenomena as if they were messages to be understood.

Let's just hurry up and get to BlooP, since that's what I'm trying to focus on here:

"Our topic will be searches for natural numbers which have various properties. In order to talk about the length of any search, we shall have to define some primordial steps, out of which all searches are built, so that length can be measured in terms of numbers of steps. Some steps which we might consider primordial are:

adding any two natural numbers;
multiplying any two natural numbers;
determining if two numbers are equal;
determining the larger (smaller) of two numbers.

"If we try to formulate a test for, say, primality in terms of such steps, we shall soon see that we have to include a control structure -- that is, descriptions of the order to do things in, when to branch back and try something again, when to skip over a set of steps, when to stop, and similar matters."

I will write Hofstadter's BlooP example:

DEFINE PROCEDURE "TWO-TO-THE-THREE-TO-THE" [N]:
BLOCK 0: BEGIN
     CELL(0) <= 1;
     LOOP N TIMES:
     BLOCK 1: BEGIN
          CELL(0) <= 3 X CELL(0);
     BLOCK 1: END;
     CELL(1) <= 1;
     LOOP CELL(0) TIMES:
     BLOCK 2: BEGIN
          CELL(1) <= 2 X CELL(1);
     BLOCK 2: END;
     OUTPUT <= CELL(1);
BLOCK 0: END.

The author now explains how this code works:

"However, I hope that this algorithm is simple enough that it makes sense without too much scrutiny. A procedure is defined, having one input parameter, N; its output is the desired value.

"The strategy of the above algorithm is as described earlier. You begin by taking an auxiliary variable, called CELL(0); you set it initially to be , and then, in a loop, you multiply it repeatedly by 3 until you've done so exactly N times. Next, you do the analogous thing for CELL(1) -- set it to 1, multiply by 2 exactly CELL(0) times, then quit. Finally, you set OUTPUT to be the value of CELL(1). This is the value returned to the outside world -- the only externally visible behavior of the procedure.

"A number of points about the notation should be made here. First, the meaning of the left-arrow '<=' is this:

Evaluate the expression to its right, then take the result and set the CELL (or OUTPUT) on its left to that value.

"Every procedure in BlooP, when called, yields a value -- namely the value of the variable called OUTPUT. At the beginning of execution of any procedure, it is assumed as a default option that OUTPUT has the value 0. That way, even if the procedure never resets OUTPUT at all, OUTPUT has a well-defined value at all times."

Hofstadter's next BlooP example is subtraction -- only addition is primordial, so how do we subtract?

DEFINE PROCEDURE "MINUS" [M,N]:
BLOCK 0: BEGIN
     IF M < N THEN:
     QUIT BLOCK 0;
     LOOP AT MOST M + 1 TIMES:
     BLOCK 1: BEGIN
          IF OUTPUT + N = M, THEN:
          ABORT LOOP 1;
          OUTPUT <= OUTPUT + 1;
     BLOCK 1: END;
BLOCK 0: END.

The author now explains how this code works:

"Here we are making use of the implicit feature that OUTPUT begins at 0. If M is less than N, then subtraction is impossible, and we simply jump to the bottom of BLOCK 0 right away, and the answer is 0. That is what is meant by the line QUIT BLOCK 0. but if M is not less than N, then we skip over that QUIT-statement, and carry out the next command in sequence (here, a LOOP-statement). That is how IF-statements always work in BlooP."

We've discussed IF-statements on the blog before. That's because they're featured in Lesson 2-3 of the U of Chicago text.

"Notice that there are two distinct instructions for jumping downwards: QUIT and ABORT. The former pertains to blocks, the latter to loops. QUIT BLOCK n means to jump to the last line of BLOCK n, whereas ABORT LOOP n means to jump just below the last line of BLOCK n. This distinction only matters when you are inside a loop and want to continue looping but to quit the block this time around. Then you can say QUIT and the proper thing will happen."

We learn that BlooP supports automatic "chunking," and this is explained by an analogy:

"You might compare it to the way a good ice skater acquires new motions: not by defining them as long sequences of primordial muscle-actions, but in terms of previously learned motions, which were themselves learned as compounds of earlier learned motions, etc. -- and the nestedness, or chunkedness, can go back many layers until you hit primordial muscle-actions."

Hofstadter's next BlooP example is on testing for primality:

DEFINE PROCEDURE "PRIME?" [N]:
BLOCK 0: BEGIN
     IF N=0 THEN:
     QUIT BLOCK 0;
     CELL(0) <= 2;
     LOOP AT MOST MINUS [N,2] TIMES:
     BLOCK 1: BEGIN
          IF REMAINDER [N,CELL(0)] = 0, THEN:
          QUIT BLOCK 0;
          CELL(0) <= CELL(0) + 1;
     BLOCK 1: END;
     OUTPUT <= YES;
BLOCK 0: END.

The author now explains how this code works:

"Notice that I have called two procedures inside this algorithm: MINUS and REMAINDER. (The latter is presumed to have been previously defined, and you may work out its definition yourself.) Now this test for primality works by trying out potential factors of N one by one, starting with 2 and increasing to a maximum of N - 1. In case any of them divides N exactly (i.e, gives remainder 0), then we jump down to the bottom, and since OUTPUT still has its default value at this stage, the answer is NO. Only if N has no exact divisors will it survive the entirety of LOOP 1; then we will emerge smoothly at the statement OUTPUT <= YES, which will get executed, and then the procedure is over."

So far all we've seen are procedures. But, as we soon find out, whole BlooP programs contain chains of procedures:

"Thus, an example of a full BlooP program would be the definition of the procedure TWO-TO-THE-THREE-TO-THE, followed by the call

TWO-TO-THE-THREE-TO-THE [2]

which would yield an answer of 512."

Hofstadter's next BlooP example is on testing for Goldbach's conjecture:

DEFINE PROCEDURE "GOLDBACH?" [N]:
BLOCK 0: BEGIN
     CELL(0) <= 2;
     LOOP AT MOST N TIMES:
     BLOCK 1: BEGIN
          IF {PRIME? [CELL(0)]
            AND PRIME? [MINUS [N,CELL(0)]]}
              THEN:
          BLOCK 2: BEGIN
               OUTPUT <= YES;
               QUIT BLOCK 0;
          BLOCK 2: END
          CELL(0) <= CELL(0) + 1;
      BLOCK 1: END;
BLOCK 0: END.

We discussed Goldbach's conjecture in Lesson 2-2 of the U of Chicago text.

At this point we'll stop reading Hofstadter's Chapter 13, since his purpose for discussing BlooP (and FlooP and GlooP) is to explain what a primitive recursive function is -- basically, it's a function that can be computed by BlooP. Instead, let's compare BlooP to some other languages that I've coded in.

Mocha & BASIC

The first language I've ever coded in was, of course, BASIC. I was about four or five years old when I received my first computer -- the one that the Mocha emulator is based on. I believe that I only did a little programming in BASIC that year. I moved to a different apartment just before my kindergarten year, and the computer remained packed away until at least a year after the move. Thus I believe that I was in either first or second grade when I began learning BASIC in earnest.

Because BASIC is mentioned in the U of Chicago text -- and because I often link to Mocha from this blog -- you readers should be somewhat familiar with it. So let's try converting some of our BlooP programs to BASIC. (Yes -- I'm actually coding in Mocha for something other than music.)

https://www.haplessgenius.com/mocha/

We'll start with our "two to the three to the" program:

10 INPUT N
20 A=1
30 FOR X=1 TO N
40 A=3*A
50 NEXT X
60 B=1
70 FOR X=1 TO A
80 B=2*B
90 NEXT X
100 PRINT B

When ready, we type in RUN to execute the program.

So Lines 20-90 correspond to the BlooP procedure above. To make a complete program, we start with Line 10, which asks the user to input a value for N. The final output is then printed on Line 100. In BASIC, all lines must be numbered, whereas in BlooP, only the blocks (0 and 1) are numbered.

As double exponentials increase so quickly, this program is only valid when N equals 1, 2, 3, or 4. If we try to input 5 for N, we'll see the following error message:

?OV ERROR IN 80

which stands for "overflow."

As it turns out, exponentiation is primordial in Mocha BASIC. We just use the up-arrow key (the actual arrow key, although it appears as a caret ^ here on the blog). If we want, we can enter the following in immediate mode (that is, without coding a full program):

PRINT 2^(3^1)

Exponentiation is left-associative in Mocha, so we must use parentheses. Of course, it doesn't matter much when the exponent is 1, but if we use 2 for N, our answer will be much too low unless we include parentheses:

PRINT 2^(3^2)

Notice that the output here is:

512.000002

instead of the expected 512. This is because for primordial exponentiation, Mocha internally uses logarithms, which aren't exact. The error is more glaring when we try 3 for N:

134217729

which can't be exact, since a power of two can't possibly be odd (except for 2^0 = 1). In fact, the best way to avoid this error is simply to avoid the primordial exponentiation and just use the 10-line program we coded above.

Let's try the minus program next. Since we wish to enter a new program, we type in:

NEW

And here's the program:

10 INPUT M,N
20 IF M<N THEN END
30 FOR X=0 TO M
40 IF X+N=M THEN PRINT X: X=M
50 NEXT X

There are a few things going on here. First, we must input two numbers, M and N. We can enter both of them at once, separated by a comma; if we enter only one number, a double ?? appears as the computer waits for the second input.

I wanted to illustrate the difference between QUIT and ABORT by having the QUIT line end the program and the ABORT line terminate the loop by setting X to the maximum value of the loop, M.

Technically, a BlooP program might contain lines after a QUIT or ABORT that BASIC would need to skip over, so there should actually be GOTO lines in the BASIC version here. I decided not to use these since I don't want our program to be too cluttered.

Also, the BlooP program outputs the default value 0 when M is less than N. For BASIC, I decided not to have Mocha print anything in this situation. The zero is printed only when the actual difference is 0 (that is, when M equals N).

There's nothing like AT MOST in BASIC. Instead, the FOR loop of BASIC has a variable, X, that keeps track of how many times we've gone through the loop. Thus OUTPUT <= OUTPUT + 1; is not needed -- the NEXT X line automatically increments the value of X. And once again, instead of an ABORT line, we end the loop by setting X to the maximum value M.

In most real computer languages, including BASIC, minus is primordial (and there's no rounding error associated with integer subtraction, as opposed to exponentiation) -- and it also returns negative values if necessary. An actual BASIC coder would use the primordial minus, not this above program.

Let's try the prime program next:

NEW
10 INPUT N
20 IF N<2 THEN PRINT "NO": END
30 IF N=2 THEN PRINT "YES": END
40 FOR X=2 TO N-1
50 IF N/X=INT(N/X) THEN PRINT "NO": END
60 NEXT X
70 PRINT "YES"

There are a few changes I made here when converting from BlooP to BASIC. First of all, Hofstadter treated 0 as a special case, but he forgot about 1 as a special case. His program will output "YES" even though 1 is not prime. I fix it here by changing N=0 to N<2 in this program.

But as it turns out, we need to treat 2 as a special case as well. (This is not Hofstadter's error -- the problem here lies with Mocha BASIC, not BlooP.) In BlooP (and many real languages), it's possible for the number of iterations of a loop to be 0. If we input 2 to the BlooP version, the loop will iterate 0 times (that is, 2 minus 2), and so the entire loop is skipped, taking us straight to "YES." But in BASIC, a FOR loop always runs at least once -- the computer doesn't check the limit until it reaches NEXT. Thus for N equal to 2, the loop FOR X=2 TO N-1 would still run once, find that 2 does divide 2 evenly, and then print "NO." We avoid this by including 2 as a special case in Line 30.

(Note: The zero problem also occurs in the double exponentiation problem. If we input 0, BASIC returns 8, even though 2^(3^0) is actually 2.)

The BlooP procedure calls two other procedures, MINUS and REMAINDER -- here Hofstadter wanted to illustrate what it means for procedures to call each other. But this isn't the best example in BASIC to demonstrate this. Instead, we just use primordial minus for subtraction.

Meanwhile REMAINDER isn't primordial in either BlooP or BASIC. Most BASIC coders would implement REMAINDER by using primordial division and INT instead. This gives the integer part of a number. If the quotient equals its own integer part, this means that the fractional part (and hence the remainder of that division) is 0.

OK, let's tackle the Goldbach program:

NEW
10 INPUT M
20 IF M<4 THEN PRINT "NO": END
30 FOR Y=2 TO M/2
40 N=Y
50 GOSUB 130
60 A=C
70 N=M-Y
80 GOSUB 130
90 B=C
100 IF A AND B THEN PRINT "YES": END
110 NEXT Y
120 PRINT "NO": END
130 IF N=2 THEN C=-1: RETURN
140 FOR X=2 TO SQR(N)
150 IF N/X=INT(N/X) THEN C=0: RETURN
160 NEXT X
170 C=-1: RETURN

This time, I do want to demonstrate one procedure calling another -- in this case, the Goldbach procedure calling the primality procedure. This is represented in BASIC by GOSUB, which is similar to GOTO -- except that instead of END, we RETURN to the main procedure.

In our main procedure, I decided to return "NO" early if the input is less than four, since the smallest possible sum of primes is 4. Then, since I want to save some time, the loop runs only up to half the input, rather than the input itself. This is because if two numbers have a certain sum, one addend is at most half the sum while the other is at least half the sum. Thus if we're trying to find a Goldbach partition for 98 and make it all the way to 49 without finding a solution, we can be certain that there is no solution.

Now here's the most important thing about one procedure calling another. Notice that in BlooP, the Goldbach problem has a parameter N and stores values in CELL(0). It calls the primes procedure -- which itself has a parameter N and its own CELL(0). Clearly, these are different -- for example, if we're finding a Goldbach partition for 98, N for Goldbach is 98, while CELL(0) for Goldbach takes on values 2, 3, 4, 5, 6, and so on. During each iteration, Goldbach's CELL(0) becomes Prime's N, while Prime's CELL(0) takes on different values. When Prime's N is 15, its CELL(0) takes on the value 2, and then 3, as it hunts for factors of 15. And all of this happens without changing Goldbach's N from 98 or its CELL(0) from 15.

Hofstadter never explains this in his book, but apparently, BlooP uses variables that are local to the procedures in which they are found. If a procedure has a parameter, then it's a local variable, and any time a CELL appears in a procedure, it's a local variable as well. Thus different procedures can have local variables with the same names without affecting their values in other procedures.

But BASIC uses only global variables. Thus we can't simply use the same N and X (which is what we've been using for CELL(0) in our BASIC programs). Here, I decided to use M and Y in the Goldbach part of the program and then switch to N and X for the primality part.

This explains what Lines 40 and 70 do. They set N to the correct value needed for the primality part, which begins as soon as GOSUB 130 is reached.

The subroutine in Lines 130-170 matches Lines 30-70 of the original primes program. We dropped the old Line 10 because we don't want the user to keep entering values for N -- it's the main part of the program that determines what N needs to be. We also dropped the old Line 20 because the special cases 0 and 1 no longer need to be considered -- the new Lines 20-40 guarantee that N never needs to be less than 2. But since N might need to be 2, the special case for 2 remains in Line 130.

The loop in Line 140 has also been shortened -- just as the loop in Line 30 only needs to go up to M/2, the loop in Line 140 only needs to go up to the square root of N. Just as one addend of M must be at most M/2, one factor of N must be at most SQR(N). This significantly saves us time, especially since we may need to determine whether large numbers are prime.

Just as we don't want the user to INPUT values of N for the prime portion, we don't want Mocha to PRINT the words "YES" or "NO" for each factor. So instead, we use C as our OUTPUT value, except that we use special codes that BASIC recognizes -- 0 means "NO" and -1 means "YES." The values -1 and 0 automatically work with IF and AND in Line 100, just as YES and NO automatically work in the BlooP version. These are called Boolean values -- and a procedure that returns a Boolean value is called a "predicate."

By the way, if a number has a Goldbach partition, we see the word "YES" on the screen -- but we don't know exactly what the two prime addends are. But notice that at the time we reach the END, one of the addends is still sitting in Y and the other is in N (since it's the last prime we found). Thus after the program is done, the command PRINT Y,N displays the two prime addends.

OK, so now you understand the entire BASIC version. There are other ways to shorten the run time for this program, including taking advantage of parity (that is, evenness and oddness).

First of all, 4 is the only even number whose prime addends are both even (2). So we can consider 4 to be a special case at the start of the program (automatically say "YES"). And if an odd number has a Goldbach partition, one of the primes must be 2. So we can just test M-2 for primality and be done with it.

For all even numbers greater than 4, both prime addends are odd -- and any factor of a composite odd number is itself odd. So the prime test only needs to test odd divisors, which we do by changing the loop in Line 140 to FOR X=3 TO SQR(N) STEP 2. This time, 3 needs to be the special case in Line 130 instead of 2, since once again, we can't have a FOR loop that repeats zero times.

Another line that can be changed is Line 90 -- just delete it and have 100 say IF A AND C, since C still contains the primality of the second addend. I kept it here because it makes the program easier to understand, but if the focus is on saving time in dealing with large numbers, then we should skip it.

Of course, there's one more way to save significant time. We know that Goldbach's conjecture has been verified for all even numbers up to about 4 * 10^18 -- and as we already saw by finding 2^(3^4) in the first problem above, numbers of that size are shown only approximately, in scientific notation. Thus for all even numbers that can be expressed exactly in Mocha (and hence are the sums of numbers for which primality can be determined), Goldbach's conjecture is true. So just have the computer print "YES" if the number is even (and at least 4).

That would be cheating, of course. And besides -- if we want to find the specific addends which give the even number, we should just run the program and print the values of Y and N afterward.

The fact that BlooP has local variables makes it more powerful than BASIC. It means that procedures can be black boxes to their users -- we don't care how they work, but only that they work.

Indeed, the primes program in BlooP uses a procedure called REMAINDER, but Hofstadter never explains how it works. Thus to us, it is a black box. Now if BlooP didn't have local variables, we might suddenly be concerned -- "Is either parameter named N, which might clash with the parameter named N in PRIME? Does REMAINDER use CELL(0), which is also needed in PRIME?" Since the variables are local, we don't have to worry that the remainder procedure uses N or CELL(0), because they'd be distinct from the variables in the primality procedure anyway.

But since all variables in BASIC are global, it makes a huge difference exactly which variables are used in which procedure. There can be no black boxes in BASIC. This is why BASIC is considered inferior to BlooP and other languages.
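
Just to picture how a language with local variables handles this, here's one more of my hedged Java sketches (my own illustration, not taken from any of these books). Each method gets its own private n, so a REMAINDER-style helper really can stay a black box:

class LocalDemo {
    public static void main (String args[]) {
        System.out.println(isPrime(15));    // prints false
        System.out.println(isPrime(13));    // prints true
    }
    static int remainder(int n, int d) {    // this n is local to remainder
        int r = n;
        while (r >= d) r = r - d;           // naive repeated subtraction
        return r;
    }
    static boolean isPrime(int n) {         // a completely separate n
        if (n < 2) return false;
        for (int i = 2; i < n; i++)
            if (remainder(n, i) == 0) return false;
        return true;
    }
}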

The Logo Language

The second computer language that I learned was Logo. This is what it meant to grow up in the 1980's -- BASIC and Logo were prevalent during that decade.

And while BASIC dominated home computers, Logo dominated school computers. As I've said before on the blog, my second grade teacher -- who was also my fifth grade teacher -- was quite strong at teaching math. But computers were also her specialty. During the Path Plan, the elective she taught was a computer class where we learned about Logo. It was held in the computer lab -- where most school computers were in the 1980's and early 1990's.

But there were opportunities to learn Logo well before Path Plan began. When I had this teacher in second grade, it was actually a Grades 2-3 combination class. Two of the other second and third grade teachers were so impressed by her knowledge of Logo that they asked her to take their classes to the computer lab as well. In return, those other teachers took our class -- one of them would teach us art, while the other taught vocal music. (The art teacher would go on to be my third grade teacher, and she also gave me some piano lessons on the side.)

While most of my classes began Logo in the second grade, I actually had the opportunity to use it in the first grade. Because I was so far above grade level in math, my first grade teacher let me take a special computer class in another room once a week. No, it wasn't my future second grade teacher (although she would give me some special reading lessons at another time), and no, it wasn't in the computer lab either. There was one classroom across the hall that actually had a few computers, and they were installed with another version of Logo, called Logo Writer. And these computers were in color, as opposed to the black/white (actually black/green) computers in the lab. The other students in the class were in Grades 5-6, thus marking the largest grade-level difference between the other students and me in any class I took.

And so even though I call BASIC my first language and Logo my second, I probably learned the nuts and bolts of both languages at the same time, around the first grade.

We usually associate Logo with turtle graphics -- and naturally, when we studied Logo in elementary school, it was nothing but turtle graphics. It wasn't until I stumbled upon Brian Harvey's website that I learned that Logo was a full-fledged language that was about more than just a turtle:

https://people.eecs.berkeley.edu/~bh/logo.html

And it's because of this that we can actually translate Hofstadter's BlooP programs into Logo. By the way, recall that Logo's turtle graphics are also mentioned in Lesson 13-8 of the U of Chicago text -- and I linked to a Logo interpreter for Lesson 13-8 last year:

https://www.calormen.com/jslogo/

Let's start with our double exponentiation:

to twotothethreetothe :n
local "cell0
make "cell0 1
for [i 1 :n] [make "cell0 3*:cell0]
local "cell1
make "cell1 1
for [i 1 :cell0] [make "cell1 2*:cell1]
output :cell1
end
print twotothethreetothe 2

This gives the correct value 512 as expected. It also works for 1, 3, 4, 5, and 6 -- two more values than BASIC can handle. Meanwhile, it still gives the wrong value for 0 -- and this time, it's even worse than BASIC, as Logo gives 512.

We notice that both Logo and BlooP use output for, well, the output. Also, we observe that Logo, like BlooP, has local variables, since we actually see the word local in the code! I decided to use the names cell0 and cell1 for the variables, though I could have given them other names.

In the for loop, we see the command make "cell0 3*:cell0, with different punctuation to mean "make (the new variable) cell0 equal to 3 times the (old value of) cell0." 

We move on to the next example, minus. Click "clear" in the lower-right corner and then enter in:

to subtract :m :n
if :m<:n [output 0]
for [i 0 :m] [if :i+:n = :m [output :i]]
end
print subtract 7 2

Notice that like BASIC, the Logo for loop is associated with a variable, in this case i, that keeps track of how many times we've gone through the loop. We don't use output until we're ready to output -- and it doesn't have a default value, so we must explicitly output 0.

I changed the name of this procedure to subtract. That's because minus is already primordial in Logo, although it refers to unary minus (to enter negative numbers). We can indicate primordial subtraction just as you'd expect, so print 7-2 outputs 5. Oh, and Logo coders don't use the word "primordial" -- the preferred word is primitive.

Our next example is the prime? predicate. This time, let's follow Hofstadter and have prime? invoke subtract -- to do this, we'll delete that last line print subtract 7 2 and insert the following text in its place:

to prime? :n
if :n<2 [output false]
if :n=2 [output true]
for [i 2 subtract :n 1] [if (remainder :n :i) = 0 [output false]]
output true
end
print prime? 7

All of the variables are local, so we don't need to worry about both subtract and prime? containing the same parameters n and i. Fortunately, remainder is primordial/primitive in Logo, so we don't need to write it.

Meanwhile, the official Boolean values in Logo are true and false. These are equivalent to the numbers 1 and 0. (BASIC is the only language I know where true is -1 instead of 1.) Whenever we try to print such a value, it prints as 1 or 0 (so 1 means "prime," 0 means "composite"). But we have to write it this way in order to feed the Boolean output to the Goldbach procedure.

(This also marks a difference between Brian Harvey's "Berkeley Logo" and the Calormen interpreter linked above. In Berkeley Logo, "true and "false require quotes and are printed as the actual words "true" and "false.")

Speaking of which, let's write our Goldbach example:

to goldbach? :n
if :n<4 [output false]
for [i 2 :n] [if and prime? :i prime? subtract :n :i [output true]]
output false
end
print goldbach? 30

This looks almost like the BlooP example. The main difference is that in Logo, and occurs before the following two predicates, not in between as in BlooP or BASIC.

The Pascal Language

For most of my childhood, BASIC and Logo were the only languages I knew. BASIC was my main language at home, and Logo was the dominant language at school.

At some point during my junior or senior year in high school, the AP Computer Science exam, which had been based on Pascal, changed to C++. Although I never took AP Comp Sci, I did notice that the old Pascal texts were given away at my school for free.

And so Pascal was the third language that I learned. For Pascal, even more so than for some of the languages I would learn later on, I had no opportunity to write any code on an actual computer, so all I know about Pascal is theoretical.

Even though Brian Harvey's website is mostly about Logo, he does have a Pascal page -- indeed, here he compares Pascal to his preferred language:

https://people.eecs.berkeley.edu/~bh/v3ch4/langd.html

Much of Hofstadter's BlooP reminds me of Pascal. Both languages have blocks that the user can both begin and end. Both languages have procedures -- except that the procedures that Hofstadter writes in BlooP are more accurately called functions in Pascal. Like algebraic functions, Pascal functions have an input and an output.

In both BlooP and Pascal, every line ends with a semicolon, and there's a period at the very end. But the Pascal period only comes at the end of the whole program -- each function ends with a semicolon.

Let's try our double exponentiation function in Pascal:

function twotothethreetothe(n:integer): integer;
  var cell0, cell1, i: integer;
  begin
    cell0 := 1;
    for i := 1 to n do
      cell0 := 3 * cell0;
    cell1 := 1;
    for i := 1 to cell0 do
      cell1 := 2 * cell1;
    twotothethreetothe := cell1
  end;

Now that I've written it out, we can see a few more differences between BlooP and Pascal. Yes, both languages have begin and end blocks, but if a for loop contains only a single statement, there's no need to begin and end the block. We also notice that instead of output like BlooP or Logo, the name of the function itself is assigned the output.

If we wanted to include this in a full Pascal program, it would look like this:

program calculate;

function twotothethreetothe(n:integer): integer;
  var cell0, cell1, i: integer;
  begin
    cell0 := 1;
    for i := 1 to n do
      cell0 := 3 * cell0;
    cell1 := 1;
    for i := 1 to cell0 do
      cell1 := 2 * cell1;
    twotothethreetothe := cell1
  end;

begin
  writeln(twotothethreetothe(2))
end.

And there's that period, because that's the end of the program. By the way, a Pascal program must go through a compiler before it is executed. The compiler converts the program into machine language, and if the program contains errors, it won't compile.

Here I just stuck to letting n be 2, because I'm not quite sure how high n can be. All variables here are considered to be of type integer, but I don't know how high integers can be. Other languages distinguish between "short" and "long" integer variables. For short variables, 2^(3^2) is the highest value this function can output, but for "long" variables, 2^(3^3) is the maximum. (The value 2^(3^4) that BASIC gave us is often called a "float" in other languages, while Logo's 2^(3^6) corresponds to the "double.")
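Since my Pascal here is purely theoretical, I can't easily check these limits directly. But here's a quick sketch of my own in C++ (the language I'll return to near the end of this post) that asks the compiler for each type's largest value -- keep in mind that the exact sizes of short, long, and so on vary by machine and compiler:

#include <iostream>
#include <limits>

// Print the largest value each numeric type can hold, so we can see
// which powers 2^(3^n) still fit. (My own sketch -- the exact values
// are implementation-dependent.)
int main()
{
     std::cout << "short:  " << std::numeric_limits<short>::max() << std::endl;
     std::cout << "long:   " << std::numeric_limits<long>::max() << std::endl;
     std::cout << "float:  " << std::numeric_limits<float>::max() << std::endl;
     std::cout << "double: " << std::numeric_limits<double>::max() << std::endl;
     return 0;
}

On a typical modern machine, short tops out at 32,767 -- enough for 2^(3^2) = 512 but not 2^(3^3) -- while double reaches roughly 1.8 * 10^308, which is why even 2^(3^6) fits.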

Let's try out other BlooP programs in Pascal. We proceed with minus -- here, we'll only write the functions and avoid writing a driver program:

function minus(m,n:integer): integer;
  var i: integer;
  begin
  if m < n then
    minus := 0
  else
    for i := 0 to m do
      if i+n = m then
        minus := i
  end;

Unlike Logo, simply assigning to the name of the function doesn't end the function. In this sense, Pascal is more like BlooP, where in Hofstadter's version of minus, he assigns to output without the procedure ending. Instead, he used quit and abort -- standard Pascal has nothing quite like them, although some dialects (Turbo Pascal, for instance) provide exit to leave the current routine and halt to end the program. I definitely know that there's nothing like at most in Pascal.

For the first if, I was able to add an else section so that the for loop only executes when the condition m < n is false, so that simulates the BlooP quit. But adding a second else to the inner if can't abort the loop the way we can in BlooP, so the loop continues running.

Pascal coders who want to have a loop end early wouldn't use a for loop, which must be executed a predetermined number of times (if they are executed at all). Instead, they might use a while loop:

function minus(m,n:integer): integer;
  var i, d: integer;
  begin
  d := 0;
  i := 1;
  if m > n then
    while d = 0 do
      if i+n = m then
        d := i
      else
         i := i+1;
  minus := d
  end;

This version avoids iterating the loop more times than necessary. (One wrinkle: in Pascal we can assign to the function name, but we can't read it back inside the function -- that would be a recursive call -- so I keep the answer in a separate variable d for the while test and copy it into minus at the end.) The loop doesn't even start unless we have m > n. And it stops as soon as d is assigned a nonzero value. (You might ask, what would happen to the loop if the answer of the subtraction really is zero? Well, that can't happen unless m = n -- and we don't even enter the loop unless m > n.)

Of course, like most languages, Pascal has a primitive subtraction operation, so this minus function isn't necessary.

Let's move on to prime? This returns true or false, and so we use Pascal's Boolean type. But since we can't include question marks in function names, it's traditional to use the word "is" at the start of the function name:

function isprime(n:integer): boolean;
  var i: integer;
      p: boolean;
  begin
    i := 2;
    p := true;
    if n > 1 then
      while (i < n) and p do
        if n mod i = 0 then
          p := false
        else
          i := i+1
    else
      p := false; {n is 0 or 1}
    isprime := p
  end;

Remainder is primitive in Pascal, where it is called mod. (Also, as in the while version of minus above, I keep the running answer in a separate Boolean variable p, since the function name can't be read inside its own body, and copy p into isprime at the end.)

I believe a for loop can be executed 0 times in Pascal (when the start value already exceeds the end value), unlike in BASIC and Logo, where it runs at least once. But I definitely know that a while loop can be executed 0 times -- and this is exactly what happens when n is 2. The opening while test fails, as 2 is not less than 2 -- this means that p correctly remains true, so we no longer need to treat 2 as a special case.

Of course, 0 and 1 remain special cases. The final else handles 0 and 1 -- here I added a comment in braces (that's ignored by the Pascal compiler) to make it clearer to the readers.

Let's finish up with our Goldbach function:

function isgoldbach(n:integer): boolean;
  var i: integer;
      g: boolean;
  begin
    i := 2;
    g := false;
    while (i < n) and (not g) do
      if isprime(i) and isprime(minus(n,i)) then
        g := true
      else
        i := i+1;
    isgoldbach := g
  end;

It's also possible to write this program using another type of loop -- a repeat loop, which always executes at least once, and it keeps looping until the condition is true. (Keep that straight -- if it's a while loop, it loops when it's true, and if it's repeat, it loops when it's false.) Whenever I see the word while followed by not, it looks better as a repeat loop:

function isgoldbach(n:integer): boolean;
  var i: integer;
      g: boolean;
  begin
    i := 2;
    g := false;
    repeat
      if isprime(i) and isprime(minus(n,i)) then
        g := true
      else
        i := i+1
    until (i > n) or g;
    isgoldbach := g
  end;

Of course, since this repeat loop executes at least once, we must make sure that it doesn't fail when n is a tiny number. For example, when n is 2, the single loop checks whether 2 and 0 are prime. Since 0 isn't prime, the test fails and isgoldbach remains false, as it should. Thus in this program, the single iteration doesn't cause isgoldbach to return incorrect values. (This is different from my BASIC version, where I treated anything less than 4 as a special case in order to avoid 0 and 1.)

The TI-BASIC Language

At around the same time I learned about Pascal, I acquired my first TI graphing calculator, a TI-82. I noticed that the calculator had a PRGM (program) key, and so coding is possible here as well.

The language of the TI calculator is often called TI-BASIC. Despite this name, it has only some similarities with BASIC. Indeed, its lines aren't numbered, and it has while loops that BASIC lacks.

The main thing TI-BASIC does have in common with its namesake is that all variables are global. It's because of this limitation that our TI programs will have to be similar to our BASIC programs as far as naming variables is concerned.

Let's start once again with double exponentiation:

PROGRAM:TWOTO3TO
:Input N
:1->A
:For(X,1,N)
:3A->A
:End
:1->B
:For(X,1,A)
:2B->B
:End
:B

This program works for N up to 5 -- 6 causes an overflow. Oh, and For loops can be executed 0 times on the TI, and so entering 0 for N gives the correct value 2.

Here is our minus program:

PROGRAM:MINUS
:Input M
:Input N
:If M<N
:Then
:Disp 0
:Stop
:End
:For(X,0,M)
:If X+N=M
:Then
:Disp X
:Stop
:End
:End

For this program, I represent BlooP's QUIT and ABORT by using Stop. But keep in mind that TI's Stop ends the entire program, not just a block or loop.

Meanwhile, TI's Then is a bit more like Pascal's then begin. If the then block contains only a single statement, then we don't need to use the word then. For example, suppose I decided that whenever M<N I don't print anything at all, so the Disp 0 line isn't needed, only Stop. Then we can leave out both the Then line before it and the End line after it.

On the other hand, For always requires End even if only a single line needs to be looped. There are two Ends at the end of this program because both For and Then need an End.

Let's tackle the prime program next:

PROGRAM:ISPRIME
:Input N
:If N<2
:Then
:Disp 0
:Stop
:End
:For(X,2,N-1)
:If N/X=int(N/X)
:Then
:Disp 0
:Stop
:End
:End
:1

Since TI-BASIC doesn't have a Remainder primitive, we use Int just as we did in BASIC. And since TI's For loop can run 0 times, we don't need to treat 2 as a special case.

And now we try the trickiest one of all, the Goldbach program. Here is where TI is just as limited as BASIC was earlier -- all variables are global, and we can't have Goldbach call ISPRIME. (Actually, the TI does let us call a program from another program, but I won't show it here.)

So we'll do something similar to BASIC and just reuse some of the same lines from ISPRIME in the Goldbach program. But then there's another problem -- ISPRIME contains Stop lines, and those Stop lines would halt the entire Goldbach program when we'd just like to halt the current loop.

Fortunately, TI-BASIC has While and Repeat loops just like Pascal. And since our Pascal version of Goldbach used REPEAT, we'll use that in our TI version as well:

PROGRAM:GOLDBACH
:Input M
:If M<4
:Then
:Disp 0
:Stop
:End
:If M<7
:Then
:Disp 1
:Stop
:End
:2->Y
:Repeat (Y>M/2) or G
:1->G
:For(N,Y,M-Y,M-2Y)
:For(X,2,√(N))
:If N/X=int(N/X)
:0->G
:End
:End
:Y+1->Y
:End
:G

In this program, I treat M less than 4 as a special case (just as we did in BASIC), so that we don't need to consider values of 0 or 1 in the primality part of the code.

The variable G contains 0 if no Goldbach partition has been found, and 1 if it has. The Repeat loop stops as soon as either G becomes 1 or Y exceeds M/2.

Inside the Repeat loop is a strange-looking For(N,Y,M-Y,M-2Y) loop. The reason for this loop is that for Goldbach, we need to test that both Y and M-Y are prime. Rather than copy the same code twice for both primality tests, we just use the For loop. The step size of this For loop is M-2Y, which is the distance from Y to M-Y. We only need a single value of G despite there being two primality tests -- if Y isn't prime, G is set to 0 and remains 0 regardless of whether M-Y is prime. So only if both primality tests pass can G remain at 1 and a successful partition be reported.

Since we're not printing the results of the primality tests, If contains only a single statement, and so it needs neither Then nor End. The first two Ends near the end of the program close the For loops, and the last End closes the Repeat.

But there are still two problems here. First of all, if Y isn't prime, then the TI still wastes time finding whether M-Y is prime anyway.

And more importantly, if Y=M/2, then Y equals M-Y and the step size for For is 0. This causes an immediate error -- the step size can't be zero, even if the start and stop values are equal.

To solve this problem, I decided to treat 4 and 6 as special cases as well, since their Goldbach partitions are 2+2 and 3+3 respectively. But this is admittedly an inelegant kludge -- it assumes that if an even number greater than 6 has a Goldbach partition, then it has one where the two addends are different (such as 10, which is 3+7 as well as 5+5), so that Repeat ends before reaching Y=M/2. (If M is odd, then M/2 isn't a whole number, so there's nothing to worry about.)

A better way to avoid this kludge is to replace For with another Repeat loop:

PROGRAM:GOLDBACH
:Input M
:If M<4
:Then
:Disp 0
:Stop
:End
:2->Y
:Repeat (Y>M/2) or G
:1->G
:Y->N
:Repeat (N=Y) or not(G)
:For(X,2,√(N))
:If N/X=int(N/X)
:0->G
:End
:M-N->N
:End
:Y+1->Y
:End
:G

Here's how the new Repeat loop works -- it starts out with N=Y. Since it's a Repeat loop, it runs at least once. At the end of the first loop, N has been replaced by M-N (that is, M-Y), and so the loop runs a second time. At the end of the second loop, N becomes M-N, which takes it back to Y. This satisfies the Repeat condition N=Y, and so there is no third loop.

This time, not only is there no error when Y=M/2, but as Y now equals M-Y, the Repeat condition is satisfied after the first loop, so we don't waste time testing its primality a second time. And the condition or not(G) is there to avoid the first problem above -- if Y is composite, then we don't waste time testing the primality of M-Y.

Of course, it's also possible to have programs call each other as subroutines. But we'd have to make a few changes to ISPRIME so that it won't keep printing all of the primality tests. The usual way to do this is with a helper program -- the new ISPRIME takes care of input and output only, while the helper program does the actual primality test. Then both ISPRIME and GOLDBACH can call the helper program for their respective primality tests. This would also eliminate the need for the special case M<4 in the Goldbach program.

By the way, some TI users might wonder why Repeat acts so strangely. Not only isn't it like Logo's repeat (which is followed by the number of times we want the loop to repeat), but for some reason, it loops when the condition N=Y is false -- and on the first loop N=Y is true, yet it loops anyway. Both of these occur because TI's Repeat is based on Pascal's, and that's how Pascal's loops work. (The use of the word until at the end of the loop makes Pascal's loop structure more obvious, but TI doesn't have until.)

C++ and Moving Forward with Java

When I arrived at UCLA, I took a PIC 10A class, which used the language C++. C++ is similar to Pascal, except that C++ is what's known as an object-oriented language. This means that most of the coding involves different types of objects that we create, and methods/functions that act upon those various objects. If I were to post C++ versions of the BlooP programs above, they wouldn't differ significantly from the Pascal versions, since the only objects we'd use are integers and Booleans.
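To give a tiny example of what I mean by objects and methods -- my own toy illustration, not anything from the PIC courses -- here's a small C++ class:

#include <iostream>

// A type of object (a Counter) together with the methods that act on it.
class Counter
{
public:
     void increment() { value++; }       // a method that changes the object
     int get() const { return value; }   // a method that reads the object
private:
     int value = 0;                       // the data lives inside the object
};

int main()
{
     Counter c;                           // create an object
     c.increment();
     c.increment();
     std::cout << c.get() << std::endl;   // prints 2
     return 0;
}

The data (value) is hidden inside the object, and the only way to touch it is through the methods -- the same "black box" idea that plain BASIC lacks.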

An author named Jesse Liberty has posted a C++ text that's similar to Brian Harvey's Logo text:

http://101.lv/learn/C++/

I learned other languages at UCLA, including Scheme. (Scheme is related to Logo, as both of them are considered dialects of Lisp.) But I mastered none of those as well as C++. I described on the blog the time when I passed the written exam in Scheme, but had trouble writing code that actually made the computer work. I always had trouble keeping track of all the parentheses (since as they say, Lisp stands for "lists in silly parentheses").

Even in C++, I excelled in PIC 10A, but my grades slipped as I proceeded through 10B and 10C. I still recall one project (on text compression) in 10C that I had so much trouble getting to compile (that is, I kept getting compile-time errors). In the end, I submitted a file that contained mostly stubs (short functions that do nothing, and are there for the sole purpose of getting it to compile). But somehow, my final grade for the project was 8/10 (whereas it would have been 0/10 if my work hadn't compiled at all). So I passed 10C, but passing a class isn't the same as being a programmer. I'm not officially a programmer or coder unless I can get the computer to work correctly.

And so as I prepare to learn another computer language here on the blog, I want to make sure that this doesn't happen again. I want to learn a language for real -- not just read a text and answer questions, but write code that actually works on the computer. There's no playing around here -- I must master coding well enough to make it a viable job alternative if the coronavirus ultimately ruins my career in the classroom.

This is what I was hoping to do when I purchased that Visual Basic text in the fall. Visual Basic is similar to BASIC, but it removes some of its defects. Based on what I see when I glance through the text, Visual Basic has variables that are local to its subroutines. There are no line numbers, so that unlike original BASIC, Visual Basic isn't reliant on "spaghetti code" with lots of GOTO statements that jump around the program.

And I'd be able to write real Visual Basic code since the book came with a CD that contains software for V. Basic. No longer would my learning be merely theoretical -- it would be real hands-on coding.

But of course, you know what happened -- the CD got stuck in my drive. I can neither run the CD nor remove it from my drive. And so the whole Visual Basic project became a bust. Of course, this means that any computer language that I learn now must be theoretical, which is better than nothing (though I'd still prefer actual software that I can code). But it also means that I'm no longer tied to Visual Basic -- I could choose a better language instead.

I know that Visual Basic is superior to spaghetti BASIC -- the latter died out in the early 1990's, soon after Visual Basic replaced it. But is Visual Basic a current, vibrant language? The final classic version, VB6, came out back in 1998, and Microsoft ended support for it in 2008 -- there hasn't been another since. Thus it's likely that Visual Basic is now just as dead as spaghetti BASIC is.

We know that Pascal is also past its prime, since C++ replaced it on the AP Comp Sci test (which is why I was able to get that Pascal text for free). But as it turns out, C++ itself only lasted a few years as the official AP computer language. Since right around the time I left UCLA, the language of choice for the AP has been -- Java. (Indeed, both Pascal and Java have had much longer reigns as the official AP language than C++ ever did.)

Like C++, Java is an object-oriented language. There's a Java link at the same website as the C++ link from above -- Laura Lemay is listed as the primary author here:

http://101.lv/learn/Java/index.htm

And the latest version of Java came out just last month, Java 14. Between this and the fact that the AP considers it to be a language worth learning, I have made my choice. I will learn to code in Java, and I'll post my progress here on the blog.

Still, I wish I had some Java software already set up so I could write actual code. I miss the days when I could just turn on the computer and voila -- BASIC was right there, with no need to purchase or download any additional software. But those days are long gone. If I want to code in Java, I'll need to download and install a development kit myself -- and perhaps spend some money on a decent environment to go with it. Since I just received a stimulus check, it might be worth it.

But are there ways to access Java without spending lots of money? After all, if it's the language for the AP test, then how are high school students -- many of whom likely used on-campus computers as their only way to get to Java before the coronavirus outbreak -- accessing the language?

I found the following Twitter link:

https://twitter.com/hashtag/apcsa?lang=en

But unfortunately, there isn't much there on access to Java for people without it at home. (I do see a lot of Female Diversity Awards there -- it's good that more girls are considering programming as a career now.) I wouldn't be surprised if AP Comp Sci students are given passwords to Google Classroom or some other link to allow them to use Java during the coronavirus -- that's good for them, but it doesn't help people like me at all.

Let me write Hofstadter's BlooP functions in C++ now. Perhaps after I learn Java, I might try writing them in that language as well. Then again, these BlooP programs don't require object-oriented coding as found in C++ or Java.

double twotothethreetothe(int n)
{
     int cell0 = 1;
     for (int i=1; i<=n; i++)
          cell0 *= 3;          // cell0 is now 3^n
     double cell1 = 1.0;
     for (int i=1; i<=cell0; i++)
          cell1 *= 2.0;        // cell1 is now 2^(3^n)
     return cell1;
}

By using double here, we can go up to n equals 6, just as we did in Logo.

unsigned int minus(int m, int n)
{
     if (m<n)
          return 0;
     for (int i=0; i<=m; i++)
          if (i+n==m)
               return i;
     return 0;     // never reached when m >= n, but keeps the compiler happy
}

Here we used unsigned int to emphasize that this minus function doesn't return negative values (as a real subtraction function would). All variables in BlooP are unsigned, and so the issue of sign never comes up. (Actually, I read that no variables in Java are unsigned, so I'd better get unsigned out of my mind before starting Java.)

bool isprime(int n)
{
     if (n<2)
          return false;
     for (int i=2; i<n; i++)
          if (n%i==0)
               return false;
     return true;
}

By now most of this should be self-explanatory. This function is bool (Boolean), the for loop can be executed 0 times so 2 isn't a special case, and % is the primitive remainder operator.

And finally, here's Goldbach:

bool isgoldbach(int n)
{
     for(int i=2; i<n; i++)
          if (isprime(i) && isprime(minus(n,i)))
               return true;
     return false;
}

Here the && operator denotes "and." Notice how much simpler our Goldbach function is in C++ than it was in TI-BASIC. I hope that Java will be as easy for me to figure out when I start it next week.

A Few Math Problems

I'm burying the Rapoport problem here, so clearly it's not a Geometry problem. Indeed, today's problem is Calculus. (So I guess today is AP day, as we jump from AP Comp Sci to AP Calc.)

Today on her Daily Epsilon of Math 2020, Rebecca Rapoport writes:

integral from 1 to 5 of (x^3 - 3x^2 - 3/2) dx

This is a straightforward integration problem. If you don't recall integration, then you can refer back to my Michael Starbird posts, particularly those on integration (Lessons 12-14, January 22nd-24th).

We begin by finding the antiderivative:

(1/4)x^4 - x^3 - (3/2)x

and then we plug in the limits of integration (5 and 1) and subtract:

(1/4)(5^4 - 1^4) - (5^3 - 1^3) - (3/2)(5 - 1)
= (1/4)(624) - 124 - 6
= 156 - 130
= 26

Therefore the value of the integral is 26 -- and of course, today's date is the 26th.
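Just to double-check the arithmetic, here's a quick numerical sketch in C++ (my own check, not part of the calendar problem) that approximates the integral with a midpoint Riemann sum:

#include <iostream>

int main()
{
     // Approximate the integral of x^3 - 3x^2 - 3/2 from 1 to 5
     // with one million midpoint rectangles.
     const int steps = 1000000;
     const double a = 1.0, b = 5.0;
     const double dx = (b - a) / steps;
     double sum = 0.0;
     for (int i = 0; i < steps; i++)
     {
          double x = a + (i + 0.5) * dx;            // midpoint of each strip
          sum += (x * x * x - 3 * x * x - 1.5) * dx;
     }
     std::cout << sum << std::endl;                  // prints approximately 26
     return 0;
}

The sum comes out essentially equal to 26, confirming the antiderivative work above.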

There was one Geometry problem on the Rapoport calendar recently:

Lines AB and CD are parallel. Find x.

[The two parallel lines are cut by a transversal, but no other points besides A, B, C, D are labeled. I will only label the eight angles -- the two lines are horizontal, and reading out the angles from left to right and top to bottom are 1, 2, 3, 4, 5, 6, 7, 8. Then Angle 2 = 155 and Angle 5 = x.]

OK, so this is easy. By the Corresponding Angles Consequence, Angle 6 = Angle 2 = 155. Then, since Angles 5 and 6 form a Linear Pair, x + 155 = 180, and so x = 25. Therefore the desired angle is 25 degrees -- and of course, yesterday's date was the 25th.

The day before yesterday was Arbor Day (or Tree Day). Arbor Day is the last Friday in April, and so this year it fell on its earliest possible date, April 24th. Thus this is the closest to Earth Day (a similar holiday) that Arbor Day can get.

(Also, notice that if Easter is on its latest possible date, April 25th, then Arbor Day is also on its latest possible date, April 30th. Thus Arbor Day can never be before Easter -- the earliest it can fall relative to the holiday is Easter Friday/Friday of Bright Week. Also, notice that Arbor Day is always one month and one day before Memorial Day. Thus Arbor Day, like Memorial Day, can keep its current definition on the Usher Calendar.)

The reason I'm writing so much about Arbor Day is that Rapoport -- most likely intentionally -- put a tree problem on her calendar that day. Of course, these aren't botanical trees, but the trees that one can find in graph theory. (The Konigsberg Bridge problem of Lesson 1-4 is also on graph theory.)

I won't state the problem here because I don't wish to draw or describe the graph here. Instead, I will mention that there is a Numberphile video on trees, including mention of the number TREE(3), which is a very big number. (Trees can also appear as objects in C++ and presumably Java.)

What About FlooP and GlooP?

According to Hofstadter, FlooP is similar to BlooP, except that there are MU-LOOPs that, unlike the For loops of most languages, can run an arbitrary number of times. He writes:

"BlooP's defining feature was the boundedness of its loops. What if we drop that requirement on loops, and invent a second language, called "FlooP" ('F' for "free")? FlooP will be identical to BlooP except in one respect: we may have loops without ceilings, as well as loops with ceilings (although the only reason one would include a ceiling when writing a loop-statement in FlooP would be for the sake of elegance)."

The author explains why we'd want to do this:

"This feature will allow us to write tests in FlooP for such properties as wondrousness [that is, a number that satisfies the Collatz conjecture] and the Tortoise property [a number that is the difference of two primes] -- tests which we did not know how to program in BlooP because of the potential open-endedness of the searches involved."

Keep in mind the original reason that Hofstadter writes about this -- he's ultimately trying to tie FlooP back to Godel and his incompleteness theorem:

"So each FlooP program now gets a very long Godel number. For instance, the shortest BlooP function (which is also a terminating FlooP program) --"

OK, there's no need for me to start writing out all the Godel numbers.

These new loops are easily simulated in other languages as while loops with no conditions -- such as while(true) -- which don't halt until a return is reached inside the loop.
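For instance, here's my own quick sketch in C++ (not actual FlooP) of the wondrousness test from the quote above. The MU-LOOP becomes while(true), and the only way out is a return:

bool iswondrous(int n)
{
     if (n < 1)
          return false;
     while (true)                  // an unbounded, FlooP-style loop
     {
          if (n == 1)
               return true;        // the only exit is via return
          if (n % 2 == 0)
               n = n / 2;
          else
               n = 3 * n + 1;      // could overflow int for very large n
     }
}

If the Collatz conjecture failed for some n, this loop would never halt -- exactly the open-endedness that BlooP's bounded loops rule out.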

As for GlooP, this should be one step beyond FlooP:

"Yes, but what is GlooP? If FlooP is BlooP unchained, then GlooP must be FlooP unchained. But how can you take the chains off twice? How do you make a language whose power transcends that of FlooP?"

But according to Hofstadter, there can be no improvement over FlooP -- that's as far as we can go:

"It is called the Church-Turing Thesis. If we accept the CT-Thesis, we have to conclude that "GlooP" is a myth -- there are no restrictions to remove on FlooP, no ways to increase its power by 'unshackling' it, as we did BlooP."

And just as "primitive recursive" means "BlooP-computable," the author explains:

"Now FlooP-computable functions can be divided into two realms (1) those which are computable by terminating FlooP programs: these are said to be general recursive; and (2) those which are computable only by nonterminating FlooP programs: these are said to be partial recursive."

Hofstadter concludes the chapter by tying this back to TNT, or Typographical Number Theory, which can handle both primitive and general recursive functions and predicates:

"If TNT could not represent primitive of general recursive predicates, then it would be incomplete in an uninteresting way -- so we might as well assume it can, and then show that it is incomplete in an interesting way."

Conclusion

Had my Visual Basic CD not gotten stuck in my drive, my original plan was to get through the early lessons quickly and slow down for the later, more complicated lessons. I was going to spend one post on Lesson 1, two posts on Lesson 2, three posts on Lesson 3, and so on. If we started this right after Presidents' Day (which was my plan), then we would have completed Lesson 6 on Pi Day.

The Java link above has 28 lessons. If we spend one post per lesson, then at my current (summer) posting rate, we should finish right around the first day of school. But I want to go slower to make sure that I'm actually learning. Then again, spending n posts on Lesson n is too slow when I'm not posting every weekday -- we'd probably only get through Lesson 7 by the end of summer.

So instead, I'll spend one post on Lesson 1 (which is just an introduction), and then two posts for every lesson from that point on. This should take us about midway through the book before the first day of school.

And so our first Java lesson will be in my next post.