
Birth of a New Amiga Emulator

Do not take the title seriously. As long as there is no release, this is strictly Vaporware(TM). Or something like that… However, with the release of Mahatma68k, such a release has become somewhat more likely.

The project, which has the current working title “I Heart Amiga”, actually started as an attempt to make WinUAE’s features available in the Unix versions. I have to admit it: I do not like C when it comes to writing a virtual machine. The abstraction level is simply too low, and I like to look at a virtual machine as a system of components (a job for an object-oriented language). It makes no sense for me to write a lot of extra code that has nothing to do with the problem. You can get it done in pure C, but then you could also use Notepad to create whole web sites.

I then had this crazy idea that the JVM might actually be fast enough nowadays to at least reach the performance of an Amiga 500 (something that has yet to be proven), and so I just kind of started hacking together some code in Scala. The nice thing about doing side projects is that you can choose whatever you think suits the problem, and I wanted a static JVM language (for performance) that allows me to avoid writing tons of boilerplate and “syntactical noise”. Scala seemed to be much more difficult than Java at first, but when I saw that the “difficult parts” were actually in the library and not part of the language, I realized that it is actually simpler. I like functional languages, but after having done the Z-machine in Erlang, using an object-oriented language to implement a VM feels more natural to me.

The emulator is still in its early stages and I do not know how fast I will progress. However, I wanted to share some details of my current design, that might be useful to anyone who is trying to implement an Amiga.

The Amiga is the most fascinating computer system ever created. That’s why I am trying to implement it in software. You might think differently, but ask yourself: can you think of any other system that was years ahead of its competition (it was released in 1985), whose competitors slowly incorporated all of its features and only surpassed it in the nineties? I have a feeling that if the Amiga had been an Apple brand, it would probably have been much more successful, but then the competition would also have tried harder to catch up quicker. The way the Amiga was marketed, it did not really pose a threat to the established competition. That’s sad, but there are lessons to learn from it. We see every day that technological excellence is not everything, and that great marketing and business development can make up for not having the best product.

I digressed. I actually wanted to show the current system design of my emulator. Here is a UML diagram of it:

DMPP System Design

I like to keep it simple, and my approach for this project is to only add new components when it seems necessary. On ZMPP I admittedly went a little overboard and created more classes than I actually wanted, which has to do with the TDD approach I took and especially my inexperience with TDD when I started ZMPP. This time, I did not use TDD (though I did use a couple of unit tests to increase my confidence on some tricky parts), but focused entirely on understanding the problem and keeping it simple.

As you can see, the system currently consists of a CPU and an address space which maps to the various system components. The Amiga uses memory-mapped I/O, and one of the most important concepts I had to learn as a non-hardware person was “incomplete address decoding”. As I have documented in the diagram, some system components respond outside of their documented addresses (e.g. Chip Memory if you have less than 1 MB, and the Custom Chips). You could learn this by looking at UAE’s memory map, but I actually found out by looking at a disassembled Amiga Exec. Finding the disassembled (and commented!) Exec source code was almost like finding the Rosetta Stone for me, and I am grateful to Markus Wandel (I have noticed that a lot of Amiga fans seem to be German!) for doing this hard work. So far, this has been one of my most important tools (besides the Hardware Reference Manual). I just kind of progressed by stepping through the ROM listing and have currently reached a point outside of the commented part (probably dos.library and graphics.library). Getting this far gives me some confidence that my CPU emulation works pretty well and that my memory mappings are correct.

Some time ago, I purchased the Cloanto DVD set: “Amiga Forever”. Looking at the videos, I am fascinated by the passion, enthusiasm and the incredible skill that the original Amiga team had and I am inspired by that. Reading through the Amiga OS assembly code is an exciting experience as well. This is the beginning of a wonderful journey into computing history, and into the internals of a computer system that is unlike any other that has come before or after it.

Emulating the Motorola 68000 on the JVM

I am currently trying to write an emulator for the Amiga, which is powered by a Motorola 68000 CPU. I initially started in C++, but switched to the JVM, for various reasons:

  • development on the JVM is much faster than in C++ and the performance penalty might be acceptable here compared to other non-native platforms
  • the license of the CPU emulation library I used was not flexible enough for me

I actually expected that I would find a suitable M68K emulation library for Java, but there seemed to be nothing that really fit my requirements. Because there is at least one person with a strong need for such a library (me), I decided to create one myself with the main goals:

  • very liberal license (BSD)
  • fast and simple to use
  • easy to maintain

I have created that project on Sourceforge, and did my first checkin today.

There are no released files and the source is available in a git repository. It is basically a generator written in Ruby with a database containing information about the instructions (decoding, timing, output, execution), which is mostly defined as Ruby hashes. This approach is inspired by the emulation libraries as they are found in UAE or Musashi. The advantage is that the step for decoding the instruction is replaced by a simple lookup in an array, so that the CPU only has to evaluate the parameters and execute the instruction.

Execution times (which are necessary for my emulation) are also mostly pre-calculated: because addressing modes are available at generation time, the generator can simply calculate the number from the timing database and put it in the generated source code. There are only a couple of exceptions, which are related to conditional execution (e.g. Bcc, Dbcc, privileged instructions…).
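The generator idea can be sketched roughly like this (the hashes, names and cycle numbers are purely illustrative, not Mahatma68k’s actual tables): instruction metadata lives in Ruby hashes, and since the addressing mode is known at generation time, the cycle count is a constant baked into each generated table entry, leaving only parameter evaluation and execution for runtime.

```ruby
# Illustrative timing database: extra cycles per addressing mode.
EA_CYCLES = { dn: 0, an_indirect: 4, absolute_long: 12 }

# Illustrative instruction database entries (Ruby hashes).
INSTRUCTIONS = [
  { name: 'moveq', base_cycles: 4, mode: :dn },
  { name: 'clr.w', base_cycles: 4, mode: :an_indirect }
]

# At generation time the total execution time is computed once and
# emitted as a constant in the generated source, so the running CPU
# replaces decoding by a simple array lookup.
def generate_entry(insn)
  total = insn[:base_cycles] + EA_CYCLES[insn[:mode]]
  "table[opcode] = new Instruction(\"#{insn[:name]}\", #{total});"
end

INSTRUCTIONS.map { |i| generate_entry(i) }
# => lines of generated code, one pre-computed entry per opcode word
```

Only the conditional cases mentioned above still need a cycle-count decision at runtime; everything else is a constant in the table.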

Another nice feature is that the decoded instruction objects “know” how to print themselves, which can be used to disassemble sections of code. This feature is optional, because it makes the code much larger.

With a generator approach I can also reduce one of my main pain points with the Java language: boilerplate. Writing tons of structurally similar code would tire me out, and I’d be exhausted (and bored to death) before I actually got to write my Amiga emulator.

Currently the emulator supports 58 of the 76 instructions, and I am adding a couple each day. There is some exception handling and support for traps and supervisor mode, and interrupts will be added shortly (because I need them).

I love the 68000 CPU, I really do. I am not a hardware expert, but this processor must have been quite revolutionary when it was released in 1979. I just love how easy it was to program (assembly code that did not make you go “cuckoo”), with a flat 24-bit address space and 32-bit internal operation. It’s nice that this project gives me an opportunity to study its design and history.

It is my hope that other people find this emulator useful in some way; I have tried to make it easy to integrate into projects (maven build, example program, simple public interface).

Snow Leopard and Java Compatibility Issues

Contrary to my regular habits, I bought the latest update to Mac OS X, “Snow Leopard”, on the first day of release. Our local Apple store opened early (I think around 8-ish) on the 28th, and I picked up my copy as soon as the doors opened.

The truth is, since the days of the Amiga, I have not been as excited about an operating system release. My confidence in the quality that comes out of Apple is also almost limitless, so I just popped the DVD into my computer and waited about an hour for the install to finish, hoping that 7 GB would be magically reclaimed from my hard disk. I was wrong – it turned out to be a whopping 14 GB, which I thought was pretty impressive.

The next thing was to look for apparent problems, but apart from my “A Lot of Water” screen saver no longer working (some changes to OpenGL in Snow Leopard might have caused this *sniff*), I did not notice any, just the “it just works” that I am used to from Apple. Well – until I started to do some development in Java. The Snow Leopard installation seems to remove an existing installation of Java SE 5 and link the Java SE 6 version in its place.

So what’s the problem? Java SE 6 is compatible with Java SE 5, some would say. Yes, almost. Unfortunately there are some incompatible changes in JDBC (in my case, the Connection interface) that require me to work on Java SE 5.

Luckily someone had the same problem and published a solution: the bottom line is to download a Leopard version of Java 5 and redirect the symbolic links to that downloaded version.

After following the instructions, I could continue working on my Java project as before.

Update: I had another issue with maven even though I had already changed the version in Java Preferences. This could be fixed by adding a line

JAVA_VERSION=1.5

to the  ~/.mavenrc file.

Setting up maven for Scala and ProGuard

I have been using Scala occasionally since last year. It is less scary than it initially looked to me, and thanks to good tool support (specs comes to mind), it is even a joy to program in Scala.

What I initially found irritating, though, was that the Scala library is huge – almost 3 megabytes in size – despite running on a platform with one of the largest standard libraries in existence. It reminds me of my first “Hello world” written in Eiffel 15 years ago (that was about 4 MB). For applications run on the server, one could simply say: who cares? Still, I write quite a few applications that are distributed over the Internet, be it applets, Webstart or Android applications, and I strongly believe Scala is good for the server as well as the client side. For downloadable client applications, size actually does matter (at least to me).

Fortunately, there are tools that can strip unused code from class files and further optimize the size of the generated jar files. ProGuard, which is Open Source, is such a tool, and it even works for Android. Since I plan to replace Java with Scala as my main static JVM language (I haven’t settled on the dynamic one, but so far it looks good for Clojure), I fiddled a bit with a maven POM that I can use as a template for my projects, and I thought I might share the relevant pieces that took me a bit of experimentation to get working. Maven 2 has now become my standard Java build tool, simply because I can generalize my builds a lot when working with JVM languages and they integrate well into my Hudson/git/jira/Eclipse environment.

I like my Java applications to be started with a simple java -jar, so I configured the maven-assembly-plugin to set the main class in the manifest file and bound it to the package phase. To avoid the assembly plugin adding my class files to the jar file twice (ProGuard complains about this), the assembly:single goal is invoked:

...
<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
    <archive>
      <manifest>
        <mainClass>com.boxofrats.App</mainClass>
      </manifest>
    </archive>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
</plugin>
...

ProGuard is linked to the package phase as well and runs after the assembly goal. includeDependencyInjar is set to false to avoid warnings about duplicates, because we have already pulled the dependencies into the jar file in the assembly goal:

...
<plugin>
  <groupId>com.pyx4me</groupId>
  <artifactId>proguard-maven-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>proguard</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <includeDependencyInjar>false</includeDependencyInjar>
    <injar>${project.build.finalName}-jar-with-dependencies.jar</injar>
    <outjar>${project.build.finalName}-small.jar</outjar>
    <proguardInclude>${basedir}/proguard.conf</proguardInclude>
  </configuration>
</plugin>
...

My proguard.conf looks like this. Note that this is a setup for Mac OS X, where the JDK’s jar files are at a different location than on Windows, Solaris and Linux (there they should be in $JAVA_HOME/lib/rt.jar:$JAVA_HOME/lib/jsse.jar):


-dontwarn
-dontskipnonpubliclibraryclasses
-dontskipnonpubliclibraryclassmembers
-libraryjars /System/Library/Frameworks/JavaVM.framework/Classes/classes.jar:/System/Library/Frameworks/JavaVM.framework/Classes/jsse.jar
-keep class com.boxofrats.App {
  public static void main(java.lang.String[]);
}

When building the application with mvn package, there is a significant reduction in the size of the resulting jar file, which is exactly what I wanted. “Hello world” is about 3–4 K, but that’s of course not representative. The bottom line is: we do not have to carry around 3 megs of dead baggage, we just take what we need. Being able to create smaller applications makes development in Scala much more reasonable for mobile platforms.

Migrating ZMPP to git

I did it. Finally I moved from Subversion to git. Until today, ZMPP was the only larger project maintained by me that was still in a Subversion repository. To be fair, in the two years I used it for my projects, I have never really experienced larger problems. It felt exactly like it was intended to be: A better, more modern CVS, and it has great tool support as well, something where git could still improve.

Still, after using git in parallel to svn and cvs for about a year now, nothing really beats the joy working with a DVCS, even when used solely through the command line. Personally, the way git handles branching and merging is perfect for me: it really invites one to make wild experiments. Another nice thing is that due to its decentralized nature, it is much easier for me to add ZMPP to my Hudson setup, work on my local copy and just push the changes to Sourceforge when I think they are good enough.


A Template for Building Scheme Applications for iPhone/Mac OS X in Xcode

I have often played with the idea of writing Mac and iPhone applications in Scheme, but because I did not know the right compiler to embed (and lacked experience in Scheme), I never executed on it. While programming in Objective-C is definitely doable (and of course you can use Objective-C++), I like to have more options for native application development.

Luckily, this month there was a blog entry about Gambit-C on the iPhone which was a great starting point (and James has since put out more interesting posts on the topic). In addition to the fantastic advice found there, I wanted better integration with Xcode, since for now it seems impossible to develop (legally) for the iPhone outside of the IDE (personally, I prefer setting up builds based on cmake and making them as cross-platform as possible). In particular, I wanted a build phase to compile from Scheme to C without having to invoke the Scheme compiler manually. I have created a template for this purpose for later use and have put it on github in case other people might be interested in using it. The source is released under the new BSD license, so feel free to experiment!

The principle used is similar to the one described in James’ blog – all Scheme modules defined in the project are compiled to C files and linked into a single executable. When using the Gambit-C compiler, one has to pay attention to the code that it generates: when linking statically, we need to:

  • provide the -link switch to the compiler command
  • include all the Scheme source files we want to link into the executable

If we compiled each file separately with the link option, there would be a flat link file for each Scheme module, and we would get linker errors due to duplicate symbols. Calling gsc with all the needed Scheme modules provides it with the necessary information to generate a single flat link file. The template assumes that all the Scheme files are in the scheme subdirectory. In order to invoke the compiler with the correct parameters, I added an external target (a small Ruby build script) that scans the scheme directory and feeds it to the gsc command; it also includes a clean action that removes the generated C files.
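The essence of such a build script can be sketched as follows (function names and the directory layout are my assumptions based on the description above, not the template’s actual script): collect every Scheme module and hand the whole list to gsc in one -link invocation, so that only one flat link file is generated.

```ruby
# Build the gsc command line for a single -link invocation over all
# Scheme modules, so gsc emits one flat link file instead of one per
# module (which would cause duplicate symbols at link time).
def gsc_command(scheme_dir, gsc: 'gsc')
  sources = Dir.glob(File.join(scheme_dir, '*.scm')).sort
  [gsc, '-link', *sources]   # e.g. pass to system(*gsc_command(dir))
end

# Clean action: remove the generated C files so Xcode's clean stays
# honest and stale translations never sneak into a build.
def clean(scheme_dir)
  Dir.glob(File.join(scheme_dir, '*.c')).each { |f| File.delete(f) }
end
```

In the template this runs as the external target before the normal compile phase, so the C compiler always sees freshly generated files.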

To use the template in a different environment, the user simply needs to set the variable GAMBITC_BASEPATH, which could be as simple as /usr/local. This can be set either in the .MacOSX/environment.plist or the bash .profile script in the home directory. After a change to these, the user needs to log out and log in again so Xcode can recognize the change to the user settings. I personally prefer to install development tools somewhere in my home directory, which has the advantage that I can use different configurations and don’t pollute my system directories.

The contents of my personal GAMBITC_BASEPATH look like this:

$(GAMBITC_BASEPATH)/Debug
$(GAMBITC_BASEPATH)/Release
$(GAMBITC_BASEPATH)/Debug-iphonesim
$(GAMBITC_BASEPATH)/Release-iphonesim
$(GAMBITC_BASEPATH)/Debug-iphoneos
$(GAMBITC_BASEPATH)/Release-iphoneos

These are the directories that are the parameters to the --prefix option in the configure call when compiling gambit-c (see James’ blog). In fact, I simply built a version for each platform (Mac OS X, iPhone Simulator and iPhone Device) and symlink’ed the directories to reflect the layout above, since I do not need separate Debug and Release versions for Gambit-C.

In the build rules for the main target, I added a dummy rule which specifies the output files of the “compile scheme” target. This is the only part that I find a little ugly, because whenever the list of Scheme modules changes, this list needs to be changed, so the C compiler can include these files in the builds. I still find this solution better than checking in generated files into version control or including C files in the project tree which do not exist.

The github repository contains templates for a Mac command line tool and one for an OpenGL ES application on the iPhone. Since I am a game developer, the OpenGL ES template is most useful to me, but it is pretty easy to adapt the setup for other kinds of iPhone (or Mac OS X) applications. The initialization point in my template differs from the one described in James Long’s blog: the main.mm is unmodified, instead, the Scheme system is initialized when the view is initialized, because that is the place where I currently pull up my engine.

Happy Hacking 🙂

Porting ZMPP to Android In One Day

Being sick at home has its advantages and drawbacks. On one hand you do not really get to see too many people (which is for their own good) and suddenly you feel pretty lonely. On the other hand you suddenly get time for stuff that you always wanted to do. One of those things for me was to try writing an Android frontend for ZMPP. The modest goal was to be able to run Trinity, Curses and Minizork on Android. I reserved a total of one day for that: half a day for trying out how to implement the screen model with the Android user interface API and half a day for implementing the rudimentary user interface (with some generous amount of sleep in between).

I had already changed the ZMPP core to compile on Android last year (thanks to Sandy McArthur, who looked into the code and pointed out the parts that were incompatible) and pushed a lot of the screen model handling into the core, so what was really left to do was the plumbing into the Android interface. Luckily, Android’s user interface library is pretty flexible, so porting from Swing to Android Views seemed relatively simple. Well, guess what: I luckily reached my goal within the time budget:

This is Trinity, one of Infocom’s greatest classics (and out of my “Masterpieces of Infocom” collection that I purchased for a fortune at the beginning of the Z-machine Preservation Project). While these classics are more than 20 years old, in my opinion good Interactive Fiction is timeless like good literature.

I could actually run Curses and a good number of Version 3 games (Zork I–III, Minizork…) as well, but I guess that there are still quite a few issues, given the short time I had for implementing it. One huge issue I instantly noticed is the performance: for Curses, response times are pretty horrible; for the Infocom games and Minizork it is pretty okay. I can only imagine that a modern piece of Interactive Fiction written with Inform 7 would take minutes between each turn. Infocom Z-code is tighter than Inform code, because the computers of the ’80s were much more limited than our cell phones today, and the games were written with a compiler that seemingly produced pretty good Z-code. ZMPP is a “VM-in-a-VM” approach, but I am still a little surprised by the slowness on Android’s Dalvik VM compared to Java SE.

Well, you don’t know what you don’t measure. I guess that calls for a profiling session – some time…

Bitmap Font Machine – Box of Rats’ First Product

As previously mentioned, I have lately been pushing hard to get Bitmap Font Machine released. This is the first product by Box of Rats, which owes its existence to my inability to find a suitable bitmap font generator to support my current game development.

Bitmap fonts are fonts that are, as opposed to vector-based fonts such as TrueType, stored in a bitmap. This has the disadvantage that scaling such fonts can result in quality loss, but when used in games, especially those based on 3D technology, they are pretty useful. Each glyph of a bitmap font can be texture mapped onto a rectangle by specifying the proper coordinates within that bitmap. On systems with 3D display hardware (nowadays most systems), this is significantly faster than rendering a vector font. Another advantage is that the game does not need to be shipped with the font in case a custom font is used, nor does the target system need to be able to display the original font format.

Now the straightforward way would be to create that font in Photoshop (or in my case, GIMP), and I did that the first three times or so. When I write code, I get tired and bored after doing the same thing for the third time (and will start to look for a better way). Another problem was that this procedure was only practical for fixed-width fonts like Courier, where I could simply derive the glyph positions from the equidistant grid, so I could not use proportional fonts.
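For the fixed-width case, deriving a glyph’s texture rectangle from the grid is simple arithmetic. Here is an illustrative sketch (names and layout are mine, not Bitmap Font Machine’s actual code or output format), assuming glyphs are packed row by row from ASCII 32 upward:

```ruby
# Compute the texture coordinates of a glyph in a fixed-width font
# packed into an equidistant grid inside a tex_w x tex_h bitmap.
def glyph_uv(char, cell_w:, cell_h:, tex_w:, tex_h:, cols:, first: 32)
  index = char.ord - first        # glyphs stored from ASCII 32 upward
  col, row = index % cols, index / cols
  x, y = col * cell_w, row * cell_h
  { u0: x.to_f / tex_w,           v0: y.to_f / tex_h,
    u1: (x + cell_w).to_f / tex_w, v1: (y + cell_h).to_f / tex_h }
end

# 'A' (code 65) in a 16-column grid of 16x16 cells in a 256x256 texture:
glyph_uv('A', cell_w: 16, cell_h: 16, tex_w: 256, tex_h: 256, cols: 16)
# => { u0: 0.0625, v0: 0.125, u1: 0.125, v1: 0.1875 }
```

With proportional fonts this breaks down, because each glyph has its own width and position; that is exactly the metadata a generator tool has to emit alongside the bitmap.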

As a result, I started researching bitmap font generators and quickly found quite a number of tools for the purpose, but none of them quite fit what I was looking for. Practically every one was written for the same operating system. Redmond is around the corner, but I am nowadays mainly a Mac and Ubuntu user. I also wanted a very simple user interface where I could just hit a few buttons and immediately get a good feeling for how text would look in my game.

After coming to the conclusion that I would not find the tool I was looking for, I decided to write it myself. I also decided to write it in Java, because I knew that it would allow me to control font rendering and writing on client machines easily and in a platform-independent way. I also thought that I should make the tool available to other game developers, and that’s how Bitmap Font Machine came to life. I have retargeted it to run as a browser applet, so I did not need to worry about distribution, and provided some sample code (which is currently more for demonstration than for production purposes) to show how to use the bitmap fonts on Mac OS X or SDL/OpenGL-based systems.

So far, this tool has been very useful to me, and it allows me to try out different fonts much more easily and quickly. I hope that it can be useful to other people as well. I plan to improve Bitmap Font Machine over time as I progress further with my game development projects, and suggestions are definitely welcome.

My First Business: Box of Rats

I haven’t written any blog entries in a while. The main reason is that I have been busy forming my first business. Box of Rats is a game development studio, something I have been dreaming of doing for a very long time – I was in primary school when I drafted my first plans.

Admittedly my plans look more sophisticated 24 years later, but that’s probably not a bad thing.

I have now opened my business website and will hopefully release its first product (which is not a game!) in a few days.

A New ZMPP Design

The parts for a new release of ZMPP are slowly coming together. Pichuneke, a Spanish ZMPP user, contributed a Spanish translation for the user interface, which is now the second user-initiated translation (the first being French, done by Eric Forgeot).

This is a point that makes an Open Source project fun: the involvement of its users. At some point I realized that ZMPP has turned from being my personal pet project into software that I believe now belongs to its user community, and that it has far exceeded my initial goal (being a reference implementation of the Z-machine in Java).

I also realized that “Z-machine Preservation Project” to me means not the Java implementation itself, but it is more my personal idea how a Z-machine could be implemented on a variety of platforms. Having done ports in Ruby and Erlang has deepened my understanding of the general problem of implementing the Z-machine in an implementation language which is on a higher abstraction level than C/C++. The Subversion trunk now contains an extract of the lessons learned mainly during the implementation of the Erlang port (and I am still amazed how well some things can be done in Erlang). The illustration below shows the updated design as it currently exists in both the Erlang and Java versions:

ZMPP Reference Design

As can be seen, an ExecutionControl object controls decoding and execution of instructions, which in turn work on the Machine object’s and screen model’s state. The Z-machine core now runs single-threaded only, which simplified the control logic quite a bit, especially the implementation of interrupts, which are now controlled in the user interface. Also, there are now far fewer core objects the user interface needs to deal with, which, in combination with pause/resume execution, facilitates the integration into different contexts.
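The single-threaded control idea can be sketched in a few lines (class and method names here are mine, not ZMPP’s actual API): the UI drives execution on its own thread and can pause between instructions, so input and interrupts are handled without a dedicated interpreter thread.

```ruby
# Bare-bones sketch of a pausable, single-threaded execution loop.
# The machine object is expected to answer halted? and decode_next,
# and each decoded instruction executes against the machine state.
class ExecutionControl
  def initialize(machine)
    @machine = machine
    @paused = false
  end

  def pause!
    @paused = true
  end

  def resume!
    @paused = false
  end

  # Run on the caller's thread until paused or halted; the UI decides
  # when to call run again, e.g. after a line of input is available.
  def run
    until @paused || @machine.halted?
      @machine.decode_next.execute(@machine)
    end
  end
end
```

Because run returns whenever execution pauses, the surrounding UI – Swing, an applet, or an Android view – can interleave screen updates and input handling between slices without any locking.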

The most apparent difference in the new ZMPP is the new screen model implementation, which finally uses a JTextPane, as it should have from the beginning. It should be noted that Zinc took that route years ago. The way JTextPane works is totally different from what I wanted it to be, which is why I chose custom rendering in the first place.

This time around, I took more time to analyze the Z-machine screen model more thoroughly, which led to the decision to implement the screen model view the way I think it should be implemented: with two Swing components representing the upper window (a fixed text grid) and the lower window (a flexible text area). As opposed to Zinc, but like Zoom, I wanted a scroll bar that spans both the upper and lower window but only controls the lower one.

This is not easily done using the standard AWT layout managers, so ZMPP implements its own, which manages the components in the following way: the main view is a JLayeredPane, and the layout manager puts the bottom window, which always spans the whole layout area, below the top window. The upper window is a flexible and transparent overlay over the lower window, so its output can overlap the lower window’s, and its size can be controlled by split commands.

Overall, I happily trashed a good portion of the original ZMPP code, which I credit to a better understanding of the Z-machine. Switching to JMock 2.5 (which is a huge improvement over JMock 1.x) also helped greatly in simplifying the test classes.

Why was this all necessary? As a short-term goal, I want to deliver on some promises I made: a screen model supporting selection, cut and paste, more reader-friendly margins and resizable windows, for example.

The medium-term goal is V6 support, which is still incomplete and currently deactivated for Version 1.5. It was one of the main drivers for the new design, which will hopefully make it easier to overcome some of the problems with how the old ZMPP handles V6 games. V6 support is one of the things ZMPP really should do well, to deliver on the “Preservation” in its name.

Ultimately, the changes made were done in order to provide a baseline for future improvements. I always want to try my best to support this goal. Change is the only constant in our world and as Extreme Programmers (I do not consider myself one) say: “Embrace Change”.

To ZMPP’s users: Thank you for all your support, your suggestions, improvements and criticism. Without you, the changes made and which are going to be made in the future would not have been possible.