Time continues to be in short supply as the demands of life keep me busy most of the week. Much has been accomplished, although no major animations have been created.
A new artistic direction has been discovered (see "The Muhtoombah Convergence," below)...
... and once again the news is a month behind ...
The first trial run of CanyonDeep is complete and available for download!
With all the software improvements in place, the actual rendering took about 362 hours (15 days) to produce a 600-frame, 160x120 pixel video. This area is extremely difficult to calculate because it is so close to the edge of the set, and iteration counts climb steeply as you descend into this little crevice. Some frames required over 200 million iterations per pixel!
The little trial run is discouraging -- a full production video would be 640x480 pixels, which is 16 times as many pixels per frame as 160x120. That means roughly 35 weeks of rendering, assuming no additional frames are needed (frame interpolation makes that hard to predict). I'm not sure I'm willing to dedicate my computer exclusively to this for that long. We'll see. I may try to find a slightly different location that looks similar but is a bit easier to render.
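The scaling estimate above is just arithmetic on the trial-run numbers; a quick sanity check (the variable names are mine, only the 362-hour figure and the frame sizes come from the run itself):

```python
# Back-of-envelope check of the render-time scaling for a full-size video.
# Assumes render cost scales linearly with pixel count, which matches the
# reasoning in the post.

trial_hours = 362            # measured: 600 frames at 160x120
trial_pixels = 160 * 120     # 19,200 pixels per frame
full_pixels = 640 * 480      # 307,200 pixels per frame

scale = full_pixels / trial_pixels      # how many times more pixels
full_hours = trial_hours * scale        # projected total render time
full_weeks = full_hours / (24 * 7)

print(scale)                  # 16.0
print(full_hours)             # 5792.0 hours
print(round(full_weeks, 1))   # 34.5 weeks -- "about 35 weeks"
```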
The Muhtoombah Convergence
All the work over the past two months to get the software ready for CanyonDeep and to get the adaptive animation coloring (see below) working left me kind of sick of all this and wishing for a change of scenery. I had been poking around with some of the basic convergent-type fractals like Newton and Nova for a while but hadn't really put much effort into them because they seemed kind of boring.
And then I discovered the Muhtoombah video that is part of the Biocursion DVD, and I suddenly had a new objective: I knew I could do something similar, but better.
Muhtoombah is a convergent fractal. This is a class of fractals different enough from the divergent fractals like the Mandelbrot set that my software needed some major modifications to be able to render them. I have started the necessary improvements, and I've got most of it working pretty well.
This group of fractals makes some of the most extraordinarily beautiful images you can imagine, and the look is completely different from that of divergent fractals like the Mandelbrot set, which are based on polynomials like z²+c. There are also mixed convergent-divergent fractals... There is much artistic potential here.
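The key difference is the stopping condition: a divergent fractal stops iterating when z escapes to infinity, while a convergent one stops when z settles onto a root. The post doesn't show any code, but the classic Newton fractal for z³−1 is a minimal illustration of that convergence test (this sketch is mine, not the author's renderer):

```python
# Minimal sketch of a convergent-fractal iteration (Newton's method on
# f(z) = z^3 - 1). Unlike a Mandelbrot escape test, iteration stops when
# z STOPS MOVING -- i.e., it has converged onto one of the three roots.

def newton_iterations(z, max_iter=100, eps=1e-9):
    """Return (iteration count, final z) for Newton's method on z^3 - 1."""
    for n in range(max_iter):
        if abs(z) < eps:              # derivative 3z^2 would vanish; bail out
            return max_iter, z
        z_next = z - (z**3 - 1) / (3 * z**2)
        if abs(z_next - z) < eps:     # converged: the step has become tiny
            return n, z_next
        z = z_next
    return max_iter, z

# Coloring each pixel by WHICH cube root of 1 it converges to (and how
# fast) produces the familiar three-lobed Newton fractal.
n, root = newton_iterations(complex(1.0, 0.5))
```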
I don't have any animations ready for publication yet (a couple of simple ones are close but not quite ready this weekend), but I did create a page of some representative still images. This includes some of the classic types (Newton, Nova) and a few that I created accidentally through software bugs, but which turned out to make really nice images. There's also one incredible formula I've re-discovered, and some of the images it creates just about bring tears to my eyes.
Adaptive Animation Coloring
Coloring deep-zoom animations is a real problem. The first few I did were easy, but as I get into more complicated areas of the set and zoom deeper, the range of the fractal data varies so much that it becomes very hard to convert the raw data to colors in a nice-looking manner.
Previously I had tried some dynamic adaptive schemes that expanded or compressed the color map depending on the fractal data, as well as applying some smoothing to get rid of the glitches caused by random fluctuations in the fractal count values. This worked well for some animations, but for others (especially Canyon2), the effect was visually disturbing. The problem is that the colors end up shifting as the animation progresses, and that looks weird, annoying, or even kind of nauseating.
I have been thinking hard about this and finally found an approach that seems to be working fairly well. It is based on techniques from high dynamic range photography. The result is a single static color map that applies to the entire animation and does not change. It gives static colors that are always reasonably close to the optimal color map for each frame. At this point, the method still needs some tweaking and has a few glitches, but I'm pretty confident I'm on the right track with this.
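The post doesn't describe the method in detail, so what follows is only a hedged sketch of one HDR-style approach that fits the description: pool the iteration counts from every frame, build one global tone curve from the pooled distribution, and color all frames with that single static map. All names and parameters here are invented for illustration:

```python
# Illustrative sketch (NOT the author's actual method): build ONE tone
# curve from the iteration counts of ALL frames, so every frame is
# colored with the same static map and colors never shift mid-animation.

import math
from bisect import bisect_right

def build_global_curve(frames, levels=256):
    """Histogram-equalize log(iteration count) over the whole animation.

    frames: iterable of lists of per-pixel iteration counts.
    Returns a function mapping a raw count to a palette index in
    0..levels-1 that is identical for every frame (no per-frame adaptation).
    """
    # Pool every pixel from every frame; log compresses the huge dynamic
    # range between shallow and deep frames (the HDR-photography idea).
    samples = sorted(math.log1p(c) for frame in frames for c in frame)
    # Quantile breakpoints: equal numbers of pixels (across the whole
    # animation) land in each palette bin.
    cuts = [samples[(i * len(samples)) // levels] for i in range(1, levels)]

    def to_index(count):
        return bisect_right(cuts, math.log1p(count))
    return to_index

# Usage: a shallow frame and a deep frame share one curve, so a given
# count always maps to the same color in every frame.
curve = build_global_curve([[10, 20, 40, 80], [1000, 2000, 4000, 8000]])
```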
The CanyonDeep trial run uses this method.
Coloring these convergent fractals is a little different from coloring the divergent ones. I've gone into some detail on a new technical page on the web site.
The software I use to make my images is completely my own creation. As such, it is horrendously buggy. December saw a major bug-slaying campaign over the Christmas break, and most of the worst offenders (random crashes that cause the loss of large amounts of work) are gone. The program is almost something a casual user could work with, and I'm not so afraid of losing work anymore.