After a few days of reading very, very weird disassembled code and poking registers, the odd 2D hardware finally works (for the most part). It can draw lines, so I threw in a software 3D transform. Here’s the Stanford Bunny in a glorious 448 vertices and 1416 lines of jaggy wireframe awesomeness.
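A software wireframe pipeline like this boils down to three steps: rotate each vertex, project it to the screen, and hand each edge to the hardware line-draw. Here's a minimal sketch of the idea in Python (the triangle data, scale, and screen center are made up for illustration; the real thing runs on the SPMP with the bunny's 448 vertices):

```python
import math

def rotate_y(v, theta):
    # Rotate a 3D point around the Y axis.
    x, y, z = v
    c, s = math.cos(theta), math.sin(theta)
    return (c * x + s * z, y, -s * x + c * z)

def project_ortho(v, scale=100, cx=160, cy=120):
    # Orthographic projection: drop Z, map to screen coordinates.
    x, y, _ = v
    return (int(cx + scale * x), int(cy - scale * y))

# Toy model: a single triangle stands in for the bunny mesh.
vertices = [(0.0, 1.0, 0.0), (-1.0, -1.0, 0.0), (1.0, -1.0, 0.0)]
edges = [(0, 1), (1, 2), (2, 0)]

def render(theta):
    pts = [project_ortho(rotate_y(v, theta)) for v in vertices]
    # Each edge becomes one hardware line-draw call.
    return [(pts[a], pts[b]) for a, b in edges]
```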

The chip has hardware line styling (stippling), and you can see four different settings (solid, "10" dashed, "100" dashed, "1000" dashed) in sequence. At the sparser settings it starts to look more like a point cloud, with far more points than the model has real vertices.
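Line stippling generally works by marching a repeating bit pattern along the line and plotting a pixel only where the current bit is set. I haven't reverse-engineered the chip's exact register format for this, so treat the following as a sketch of the general mechanism, not of this hardware:

```python
def stipple_mask(pattern, length):
    # pattern: a string of '1'/'0' bits repeated along the line;
    # pixel i of the line is drawn iff its pattern bit is '1'.
    bits = [c == '1' for c in pattern]
    return [bits[i % len(bits)] for i in range(length)]

# The four styles from the video: solid, "10", "100", "1000".
solid = stipple_mask("1", 8)      # every pixel drawn
dashed = stipple_mask("10", 8)    # every other pixel
sparse = stipple_mask("1000", 8)  # 1 pixel in 4: the point-cloud look
```

The longer the pattern, the fewer pixels survive per line, which is why the "1000" setting reads as isolated dots rather than dashes.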
Also of note: I'm working inside a framework that drives operation of the SPMP from the PC. While the entire bunny transformation and rendering happens inside the SPMP, the PC sends it the rotation matrix and tells it to go each frame (and also when to switch stippling and whatnot). So it's slower than it would be in pure standalone hardware, because there are still at least two serial port ping-pong commands each frame (one memory download for the matrix and one command to tell it to render the bunny with it).
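The per-frame host side is just those two transactions: push the matrix into SPMP memory, then kick the renderer. A sketch of that loop in Python (the command tags, the 16.16 fixed-point matrix format, and the transport callback are all hypothetical, only there to show the shape of the traffic):

```python
import math
import struct

def pack_matrix_fixed(m, frac_bits=16):
    # Pack a 3x3 rotation matrix as little-endian 16.16 fixed-point,
    # row-major. The actual on-wire format is an assumption.
    fixed = [int(round(v * (1 << frac_bits))) for row in m for v in row]
    return struct.pack('<9i', *fixed)

def send_frame(transport, theta):
    # Two serial round-trips per frame: matrix download, then render.
    c, s = math.cos(theta), math.sin(theta)
    m = [(c, 0.0, s),
         (0.0, 1.0, 0.0),
         (-s, 0.0, c)]
    transport(b'MATX' + pack_matrix_fixed(m))  # memory download
    transport(b'REND')                         # tell it to go
```

Those two round-trips are the fixed serial overhead per frame; a standalone build could compute the matrix on-device and skip both.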
You can grab the (ugly as hell) code in the Git repo.
Fun stuff: the projection is orthographic, so no depth information is rendered. That makes the direction of rotation ambiguous. Do you see it rotating clockwise or anticlockwise (looking at it from above)? Can you make your brain switch between them?
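The ambiguity is easy to show numerically: under orthographic projection, a point rotating by +θ is indistinguishable from its depth-mirrored twin rotating by -θ, so the clockwise and anticlockwise spins produce identical 2D frames. A quick check (point coordinates chosen arbitrarily):

```python
import math

def rotate_y(point, theta):
    # Rotate a 3D point around the Y axis.
    x, y, z = point
    c, s = math.cos(theta), math.sin(theta)
    return (c * x + s * z, y, -s * x + c * z)

def ortho(point):
    # Orthographic projection simply discards depth.
    x, y, _ = point
    return (round(x, 9), round(y, 9))

p = (0.3, 0.5, 0.8)
mirrored = (0.3, 0.5, -0.8)   # same point with depth negated
for theta in (0.1, 0.7, 2.0):
    # Clockwise vs. anticlockwise: the projections match exactly.
    assert ortho(rotate_y(p, theta)) == ortho(rotate_y(mirrored, -theta))
```

Your brain has to pick one of the two depth assignments, and with no shading or perspective cue it can flip between them at will.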