Color coding for code
One project with potential Newton use is to add a simple serial-over-USB driver to the Olimexino. This is mostly to understand how such a driver works in general, and whether something similar could be added to the Newton. There are of course integrated USB-to-serial chips already available, and solutions for the Newton, but maybe there are some other profiles which could be implemented.
The driver is written in Forth (for quick results). During development, logging is of course a useful feature. But there is one way not to do it, and that is to log from within the interrupt handler code :). Maybe some color coding of code which is called from an interrupt handler would be useful!
Rest in Peace, Hardy Marcia
With great sadness I read that Hardy Marcia, a long-time Newton pioneer, passed away on May 13. His software for the Newton set examples for functionality and usability very early on, and I learned a lot from it. It feels unfair that such an obviously great guy had to leave us so early. Farewell, you will be missed.
FORTH OLIMEXINO-STM32 !
Another interesting development board is the Olimexino STM32. It is compatible with the LeafLabs Maple, using a Wiring-based library and a simple IDE (very similar to the Arduino boards). I couldn't resist and had to bootstrap CoreForth on this board as well :)
Granted, this was a bit easier since I already have CoreForth running on a close relative (the STM32-P103, also from Olimex), so it took just a couple of hours to get the basic interpreter and compiler loop running. On the ARM SoCs it really just comes down to initializing the clocks, the GPIOs and the UART - afterwards, it's all native Forth!
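For illustration, the clocks/GPIO/UART bring-up sequence can be sketched as follows. This is only a rough Go model - the register names and bit positions are loosely based on the STM32F103 and stand in for the real memory-mapped registers, and CoreForth itself of course does this in assembly and Forth, not Go:

```go
package main

import "fmt"

// Illustrative register model, loosely based on the STM32F103 found on
// the Olimexino/STM32-P103. Real code writes to fixed memory-mapped
// addresses; plain struct fields stand in here so the sequence can be shown.
type SoC struct {
	RCC_APB2ENR uint32 // peripheral clock enable register
	GPIOA_CRH   uint32 // GPIOA configuration for pins 8..15
	USART1_BRR  uint32 // baud rate divisor
	USART1_CR1  uint32 // UART control: enable, TX, RX
}

func bringUp(s *SoC) {
	// 1. Clocks: enable the GPIOA and USART1 peripheral clocks.
	s.RCC_APB2ENR |= (1 << 2) | (1 << 14)
	// 2. GPIO: configure PA9 (TX) as alternate-function push-pull output.
	s.GPIOA_CRH = (s.GPIOA_CRH &^ (0xF << 4)) | (0xB << 4)
	// 3. UART: set the divisor for 115200 baud at 72 MHz,
	//    then enable the UART with transmitter and receiver.
	s.USART1_BRR = 72000000 / 115200
	s.USART1_CR1 = (1 << 13) | (1 << 3) | (1 << 2)
}

func main() {
	var s SoC
	bringUp(&s)
	fmt.Printf("APB2ENR=%#x CRH=%#x BRR=%d\n",
		s.RCC_APB2ENR, s.GPIOA_CRH, s.USART1_BRR)
	// prints APB2ENR=0x4004 CRH=0xb0 BRR=625
}
```

Once these three steps are done, characters can be pushed out over the UART, which is all the Forth inner interpreter needs to talk to a terminal.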
Regarding the Arduino Due, I got the interface to the embedded flash controller working, and can now enjoy a very retro block editor with the ability to preserve all work across reboots. It's almost a full-blown development system; only an ARM assembler for Forth is missing.
After changing my job last year, I have gotten much closer to hardware work, which definitely suits me. What started with CoreForth expanded into all sorts of experiments with SoCs and development boards. Initially, it was a bit difficult to settle on one development platform, but now that the Arduino Due is out, I couldn't resist and ordered one to see if I can get CoreForth running on it! The chip itself seems very capable, with lots of flash and RAM, and the Arduino form factor should make it easy to experiment.
Update: It seems it was easier than I thought; after digging through the data sheet of the SAM3X8E processor on the Due, I have CoreForth up and running. I pushed the changes to a separate branch on GitHub for now, but will merge them into the main branch once I have tested this a bit more.
More Hardware Hacking
One of my early ideas for using my Cortex-based development boards for more practical things was to implement a simple logic analyzer. It seemed to me that simply sampling the GPIO ports and recording changes with time stamps would be good enough for protocols like I2C or even SPI (when it's slow enough). I also wanted to learn the ARM CMSIS library, so I forked a very nice demo project for the STM32 board on GitHub and built a simple, SUMP-compatible logic analyzer. It's not perfect yet, e.g. there is no buffering at all, and I suspect the timings are still off, but it was definitely fun to build! Next up is likely a CoreForth version ;)
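The core sampling idea - store a timestamped sample only when the port value changes - can be sketched like this. It is a minimal Go model, not the analyzer itself (which is C on top of CMSIS); readPort and readTicks are hypothetical stand-ins for reading the GPIO input register and a free-running timer:

```go
package main

import "fmt"

// Sample records one captured transition: the port value and when it changed.
type Sample struct {
	Time  uint32 // tick count at the transition (e.g. a hardware timer)
	Value uint8  // GPIO port bits at that moment
}

// capture polls the port and stores a sample only when the value changes,
// which is the whole trick behind timestamped capture: idle stretches of
// the signal cost no memory, only transitions do.
func capture(readPort func() uint8, readTicks func() uint32, n int) []Sample {
	var out []Sample
	last := readPort()
	out = append(out, Sample{readTicks(), last})
	for len(out) < n {
		if v := readPort(); v != last {
			out = append(out, Sample{readTicks(), v})
			last = v
		}
	}
	return out
}

func main() {
	// Simulated input: a line toggling a few times, as an I2C trace might.
	// The tick counter advances on every port read.
	trace := []uint8{0, 0, 0, 1, 1, 0, 0, 1}
	tick := -1
	readPort := func() uint8 { tick++; return trace[tick] }
	readTicks := func() uint32 { return uint32(tick) }
	fmt.Println(capture(readPort, readTicks, 4))
	// prints [{0 0} {3 1} {5 0} {7 1}]
}
```

On the real board the lack of buffering shows up exactly here: if two transitions arrive faster than the poll loop runs, one of them is simply missed.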
Playing with Go
For a long time, I've been unhappy with the state of Ruby for plain standalone applications. The problem is mostly the installation of a proper Ruby environment for the application's users: the RDCL, for example, needs Ruby 1.9, but Mac OS X ships only with version 1.8.
The main need I have for the RDCL is a simple multitasking model which allows me to implement reading and writing over a serial line to the Newton while handling other tasks. It appears that Go delivers exactly that via goroutines, and I have started to experiment a little with the language. It would also allow delivering ready-to-run binaries instead of messing with the installation of an interpreter first.
Go requires a bit of a mental shift though, as it is not an object-oriented language in the classic sense. Usually this would not be such a big problem, but almost all of the Newton's concepts are designed and implemented in an OOP way. Let's see how it goes; in the end this is at least a nice learning experience :)
Yes, I'm still using my Newton!
In case anybody is wondering, I am indeed still using my Newton :) Even though some work-related changes made using it for GTD or other ways of tracking tasks less important, I found that the Newton is still the most suitable device for taking notes, journaling and keeping a master database of my contacts. The ability to have important thoughts and ideas written down in one place, and to be able to search and tag them, is very neat. Cloud-based implementations are not bad either, but with the Newton, I don't have to worry about connectivity :)
The Cortex M3 based boards I've been playing around with are quite astounding; the MCUs they use as systems-on-a-chip have a very nice range of peripherals which leave your imagination as the only limit to what can be done. Being able to interface with off-the-shelf MMC cards, for example, or talking to a simple LCD module is pretty neat.
The only major drawback is that those chips usually have very little RAM, even compared to the Newton ;) - 8 to 64 KBytes is not exactly a lot. When using CoreForth as my experimental platform, it is therefore interesting to play around a bit with different memory management approaches. Forth itself usually likes to live in RAM since it is more powerful when self-modifying code can be used, but there are ways around this limitation.
When it comes to multitasking though (which is relatively trivial on a Cortex M3, since it pushes most registers onto the stack when an interrupt occurs), a traditional preemptive threading model is quite wasteful when it comes to RAM, as each thread needs a separate stack. But the revival of event-based programming through Node.js led me to look into local continuations, in other words, functions with multiple entry points. In this area, protothreads look particularly promising for an implementation in Forth!
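Protothreads are originally a C macro trick; as a rough illustration of the underlying idea - a function that resumes at a stored entry point instead of keeping its own stack - here is a sketch in Go (a Forth version would store the resume point in the word's data field instead):

```go
package main

import "fmt"

// A protothread keeps no stack of its own; all that survives between
// invocations is a resume point and any explicitly stored state. Each
// call runs until the next "yield" and returns false when finished.
type blinker struct {
	pc    int // resume point: which entry to continue from next time
	count int // explicit state, since locals don't survive a yield
}

func (b *blinker) run() bool {
	switch b.pc {
	case 0:
		fmt.Println("LED on")
		b.pc = 1 // yield: continue at case 1 on the next call
		return true
	case 1:
		fmt.Println("LED off")
		b.count++
		if b.count < 2 {
			b.pc = 0 // loop back for another blink
			return true
		}
		return false // thread finished
	}
	return false
}

func main() {
	// A scheduler is just a loop over such threads; here only one.
	b := &blinker{}
	for b.run() {
	}
	// prints LED on / LED off / LED on / LED off
}
```

The RAM cost per thread is a handful of bytes of state instead of a whole stack, which is exactly what makes the approach attractive on an 8 KByte MCU.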
Again, something completely different
The low-level work on Blunt, reverse engineering the NewtonOS and digging into ARM assembly led me to a small side project in an area which has tickled my brain for a long time: Forth. It all started with a hard-to-grasp book by Ekkehard Floegel which I used to browse through in the local computer shop but never bought until quite recently, and a short article on the Jupiter Ace. This led to a very simple Forth-like system on the ZX Spectrum as one of my early programming adventures, with gimmicks such as 51x32 characters and text windows. Later on, I was in need of a programming language for an automotive test system (chances are that if you ever drove a Ford or Renault made in Cologne during the late 1990s, its door mechanism was tested by it), and reused some ideas from Forth.
My most recent adventure into Forth-land is however the one closest to the original ideas and principles. The result is CoreForth, a Forth implementation running on the ARM Cortex M3. Supported platforms are qemu and the LM3S811 board, and support for STM32-based boards such as the Olimex STM32-P103 is coming along nicely. Right now it is really not much more than a platform for easy experimentation with such boards, but having the interactivity of Forth makes investigating and using the numerous on-chip peripherals of the Cortex chips very simple!
I also added a couple of links to other projects I have started on GitHub.
Debugging and the Scientific Method
Once a program reaches a stable state in terms of architecture, design and functionality, debugging becomes the predominant activity in the development process (unless the architecture or design were flawed to begin with). The largest part of the debugging process is finding the root cause of errors, whereas the actual fixing is usually a much simpler activity: many bugs are fixed with one-line changes, or even one-character changes.
My approach to understanding why a program fails is to identify a set of hypotheses, and then test them one after another. There are a couple of important points in this process: testing can be done via various mechanisms, such as debugging, logging, modifying input data, modifying program code, modifying timings, or changing the execution environment. Each test should produce unambiguous results, and sequential verification is important in order to allow interpretation of those results. The hypotheses need to be simple and non-overlapping in order to reliably reduce the set of possible root causes; trying to verify too much at the same time can be quite difficult.
Reducing the Debugging Work
The actual verification process (implementing and running the tests) is usually simple but time consuming, which means that I try to automate as much as possible. But the true way to reduce debugging work is to identify the right set of hypotheses, and this is unfortunately also the toughest part of debugging. It is usually a mix of domain knowledge, intuition and a systematic approach which pays off. Domain knowledge can unfortunately only be gained by actually working with the program code; it is very rare that a problem is generic enough to be solved without a deeper understanding of the code. Intuition however is another key component altogether: it allows making assumptions without the deepest possible knowledge. Rather, deliberate ignorance helps to prevent digging in too deep (after all, understanding a system down to the metal takes a long time), and to produce hypotheses faster, e.g. based on previous experience. There is of course the risk that such intuition-based hypotheses lead nowhere, and I have had many, many situations where I declared victory too early. Humility is therefore a very important partner to intuition!
Why Debuggers are Bad
Formulating a hypothesis why a program is broken requires thinking and knowledge. Using a debugger usually increases the knowledge about the inner workings of a program, but it can easily stand in the way of the thinking part when coming up with ideas about error root causes. Debuggers are good for verifying mini-hypotheses when stepping through code, but they are very cumbersome for getting to the big picture and testing more complex hypotheses (plus they are worthless for timing issues). My impression is that they are a popular tool because they tend to give immediate results on these mini-hypotheses, which creates the illusion of progress. For me, it is far more important to read code, work with a peer, and use tracing and logging to understand the problem.
One of my favourite approaches to producing hypotheses is to take the program in its two states, working and broken, and reduce the difference in implementation between them until the smallest difference is identified as the root cause. In cases where a bug is the result of changes due to ongoing implementation work, this is very simple; git, for example, allows bisecting a series of changes, marking working and broken ones until the offending change is found.
This approach can be extended by looking at different definitions of working and broken. For example, "working" does not necessarily have to apply to the same program which is broken; it can also mean a completely different program which shares some design or implementation with the broken one. In relation to Blunt, I have been able to test some assumptions about the packet flow by checking Blunt 1 against Blunt 2, even though the two are internally very different.
It is important to keep a very open mind when looking for these instances of a working system; having them is the only way to provide a solid foundation for further debugging, and to keep one's sanity. If everything is broken, it is a pretty rocky road to make any progress. It is not impossible though; in those cases, I usually start removing parts of the broken program until at least something works, and then work backwards by adding code.
Now, how about Blunt?
All of the above also applies to reverse engineering, which is still the biggest chunk of work to get Blunt running. Reverse engineering is in some sense simpler than debugging, since the code is known to work, and understanding it is achieved by disassembly or black-box testing to verify assumptions about how the code works. I noticed that persistence and thoroughness really pay off in this area, and it seems that Blunt is very close to finally working as designed :) Stay tuned for more!