Tuesday, 5 August 2014

The resource leak bug of our civilization


A couple of months ago, Trixter of Hornet released a demo called "8088 Domination", which shows off real-time video and audio playback on the original 1981 IBM PC. This demo, among many others, contrasts favorably with today's wasteful use of computing resources.

When people try to explain the wastefulness of today's computing, they commonly offer something I call the "tradeoff hypothesis". According to this hypothesis, the wastefulness of software is compensated for by flexibility, reliability, maintainability, and perhaps most importantly, cheap programming work. Even Trixter himself favors this explanation.

I used to believe in the tradeoff hypothesis as well. I saw demo art on extreme platforms as a careful craft that attains incredible feats while sacrificing generality and development speed. However, during recent years, I have become increasingly convinced that the portion of true tradeoff is quite marginal. An ever-increasing portion of the waste comes from abstraction clutter that serves no purpose in final runtime code. Most of this clutter could be eliminated with more thoughtful tools and methods without any sacrifices. What we have been witnessing in the computing world is nothing utilitarian but a reflection of a more general, inherent wastefulness that stems from the internal issues of contemporary human civilization.

The bug


Our mainstream economic system is oriented towards maximal production and growth. This effectively means that participants are forced to maximize their portions of the cake in order to stay in the game. It is therefore necessary to insert useless and even harmful "tumor material" into one's own economic portion in order to avoid losing one's position. This produces an ever-growing global parasite fungus that manifests as things like black boxes, planned obsolescence and the artificial creation of needs.

Using a software development metaphor, it can be said that our economic system has a fatal bug: one that continuously spawns new processes that allocate more and more resources without ever releasing them, eventually stopping the whole system from functioning. Of course, "bug" is a somewhat normative term, and many bugs can actually be reappropriated as useful features. However, resource leak bugs are very seldom useful for anything other than attacking the system from the outside.
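
To make the metaphor concrete, here is a deliberately silly toy sketch of my own (not a model of any real system): every "participant" keeps allocating a new chunk on every round and never releases anything, so the total footprint only grows until the host gives up.

def simulate_leaky_economy(participants=4, rounds=10, chunk=1_000_000):
    """Toy model: every participant hoards a new chunk each round and never frees it."""
    hoards = [[] for _ in range(participants)]
    for round_no in range(rounds):
        for hoard in hoards:
            hoard.append(bytearray(chunk))  # allocated, never released
        total_bytes = sum(len(h) for h in hoards) * chunk
        print(f"round {round_no}: ~{total_bytes / 1e6:.0f} MB held and still growing")

if __name__ == "__main__":
    simulate_leaky_economy()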

Bugs are often regarded as necessary features by end users who are not familiar with alternatives that lack the bug. This also applies to our society. Even if we realize the existence of the bug, we may regard it as a necessary evil because we don't know about anything else. Serious politicians rarely talk about trying to fix the bug. On the contrary, it is actually becoming more common to embrace it. A group that calls itself "Libertarians" even builds its ethics on it. Another group, the "Extropians", takes the maximization idea to the extreme by advocating an explosive expansion of humankind into outer space. In the so-called Kardashev scale, the developmental stage of a civilization is straightforwardly equated with how much stellar energy it can harness for production-for-its-own-sake.

How the bug manifests in computing


What happens if you give this buggy civilization a virtual world where the abundance of resources grows exponentially, as in Moore's law? Exactly: it adopts the extropian attitude, aggressively harnessing as many resources as it can. Since the computing world is virtually limitless, it can serve as an interesting laboratory example where the growth-for-its-own-sake ideology takes a rather pure and extreme form. Nearly every methodology, language and tool used in the virtual world focuses on cumulative growth while neglecting many other aspects.

To concretize, consider web applications. There is a plethora of different browser versions and hardware configurations. It is difficult for developers to take all this diversity into account, so the problem has been solved by encapsulation: monolithic libraries (such as jQuery) that provide cross-browser-compatible utility blocks for client-side scripting. Also, many websites share similar basic functionality, so it would be a waste of labor time to implement everything specifically for each application. This problem has also been solved with encapsulation: huge frameworks and engines that can be customized for specific needs. These masses of code have usually been built upon previous masses of code (such as PHP) that were designed for exactly the same purpose. Frameworks encapsulate legacy frameworks, and eventually, most of the computing resources are wasted by the intermediate bloat. The accumulation of unnecessary code dependencies also makes software more bug-prone, and debugging becomes increasingly difficult because of the ever-growing pile of potentially buggy intermediate layers.

Software developers tend to use encapsulation as the default strategy for just about everything. It may feel like a simple, pragmatic and universal choice, but this feeling is mainly due to the tools and the philosophies they stem from. The tools make it simple to encapsulate and accumulate, and the industrial processes of software engineering emphasize these ideas. Alternatives remain underdeveloped. Mainstream tools make it far more cumbersome to do things like metacoding, static analysis and automatic code transformations, which would be far more relevant than static frameworks for problems such as cross-browser compatibility.
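
As a rough sketch of what such an alternative could look like, here is a tiny, entirely hypothetical build-time transformation: instead of shipping a monolithic compatibility library, it scans which features the application actually uses, checks invented per-target capability tables, and emits only the shims that are really needed. The feature names and shim bodies below are placeholders of my own, not any real browser API.

USED_FEATURES = {"fetch", "array_flat"}           # hypothetically found by scanning the app source
TARGET_GAPS = {"oldbrowser_9": {"fetch"},         # hypothetical capability tables per target
               "oldbrowser_10": set()}

SHIM_SOURCES = {                                  # tiny hand-written fallbacks, placeholders only
    "fetch": "function fetch(u){/* XHR-based fallback */}",
    "array_flat": "/* flat() fallback */",
}

def emit_shims(used, gaps, shims):
    """Return only the shim code actually required by some target."""
    needed = used & set().union(*gaps.values())
    return "\n".join(shims[name] for name in sorted(needed))

print(emit_shims(USED_FEATURES, TARGET_GAPS, SHIM_SOURCES))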

Tell a bunch of average software developers to design a sailing ship. They will do a web search for available modules. They will pick a wind power module and an electric engine module, which will be attached to some kind of floating module. When someone mentions aero- or hydrodynamics, the group will respond by saying that elementary physics is far too specialized an area, and that it is cheaper and more straightforward to just combine pre-existing modules and pray that the combination works sufficiently well.

Result: alienation


Building complex systems from more-or-less black boxes is also how our industrial society is constructed; computing just takes it to a greater extreme. Modularity in computing therefore relates very well to the technology criticism of philosophers such as Albert Borgmann.

In his 1984 book, Borgmann uses the term "service interface", which even sounds like software development terminology. Service interfaces often involve money. People who have a paid job, for example, can be regarded as modules that try to fulfill a set of requirements in order to remain acceptable pieces of the system. When spending the money, they can be regarded as modules that consume services produced by other modules. What happens beyond the interface is considered irrelevant, and this irrelevance is a major source of alienation. Compare someone who grows and chops their own wood for heating to someone who works in the forest industry and buys firewood with their paycheck. In the former case, it is easier to become genuinely interested in all the aspects of forests and wood because they directly affect one's life. In the latter case, fulfilling the unit requirements is enough.

The way of perceiving the world as modules or devices operated via service interfaces is called the "device paradigm" in Borgmann's work. This is contrasted with "focal things and practices", which tend to have a wider, non-encapsulated significance in one's life. Heating one's house with self-chopped wood is focal. Arts and crafts also offer many examples of focality. Borgmann urges a restoration of focal things and practices in order to counteract the alienating effects of the device paradigm.

It is increasingly difficult for computer users to avoid technological alienation. Systems become increasingly complex, and genuine interest in their inner workings is easily discouraged. If you learn something about such a system, the information probably won't stay current for very long. If you modify it, subsequent software updates will break your modifications. It is extremely difficult to develop a focal relationship with a modern technological system. Even hard-core technology enthusiasts tend to ignore most aspects of the systems they are interested in. As ever more complex computer systems grow ever more deeply ingrained in our society, they become increasingly difficult to grasp even for those who are dedicated to understanding them. Eventually even they will give up.

Chopping one's own wood may be a useful way to counteract the alienation of the classic industrial society, as oldschool factories and heating stoves still have some basics in common. In order to counteract the alienation caused by computer technology, however, we need to find new kinds of focal things and practices that are more computerish. If they cannot be found, they need to be created. Crafting with low-complexity computer and electronic systems, including the creation of art based on them, is my strongest candidate for such a focal practice among those that already exist in subcultural form.

The demoscene insight


I have been programming since my childhood, for nearly thirty years. I have been involved with the demoscene for nearly twenty years. During this time, I have grown a lot of angst towards various trends of computing.

Extreme categories of the demoscene -- namely, eight-bit democoding and extremely short programs -- have been helpful for me in managing this angst. These branches of the demoscene form a useful, countercultural mirror that contrasts with the trends of industrial software development and helps in grasping its inherent problems.

Other subcultures have been far less useful for me in this endeavour. The mainstream of open source / free software, for example, is a copycat culture, despite its strong ideological dimension. It does not actively question the philosophies and methodologies of the growth-obsessed industry but actually embraces them when creating duplicate implementations of growth-obsessed software ideas.

Perhaps the strongest countercultural trend within the demoscene is the shift of focus towards ever tighter size limitations, or as they say, "4k is the new 64k". This trend is diametrically opposed to what the growth-oriented society is doing, and it forces one to rethink even the deepest "best practices" of industrial software development. Encapsulation, for example, is still quite prominent in the 4k category (4klang is a monolith), but in the 1k and smaller categories, finer methods are needed. When going downwards in size, paths considered dirty by the mainstream need to be embraced. Efficient exploration and taming of chaotic systems needs tools that are deeply different from those used before. Stephen Wolfram's ideas presented in "A New Kind of Science" can perhaps provide useful insight for this endeavour.

Another important countercultural aspect of the demoscene is its relationship with computing platforms. The mainstream regards platforms as neutral devices that can be used to reach a predefined result, while the demoscene regards them as a kind of raw material that has a specific essence of its own. Size categories may also split platforms into subplatforms, each of which has its own essence. The mainstream wants to hide platform-specific characteristics by encapsulating them into uniform straitjackets, while the demoscene is more keen to find suitable esthetic approaches for each category. In Borgmannian terms, demoscene practices are more focal.

Demoscene-inspired practices may not be the wisest choice for pragmatic software development. However, they can be recommended for the development of a deeper relationship with technology and for diminishing the alienating effects of our growth-obsessed civilization.

What to do?


I am convinced that our civilization is already falling and that this fall cannot be prevented. What we can do, however, is create seeds for something better. Now is the best time to do this, as we still have plenty of spare time and resources, especially in rich countries. We especially need to propagate the seeds towards laypeople who are already suffering from increasing alienation because of the ever more computerized technological culture. The masses must realize that alternatives are possible.

A lot of our current civilization is constructed around the resource leak bug. We must therefore deconstruct the civilization down to its elementary philosophies and develop new alternatives. Countercultural insights may be useful here. And since hacker subcultures have been forced to deal with the resource leak bug in its most extreme manifestation for some time already, their input can be particularly valuable.

Sunday, 14 July 2013

Slower Moore's law wouldn't be that bad.

Many aspects of the world of computing are dominated by Moore's law -- the phenomenon that the density of integrated circuits tends to double every two years. In mainstream thought, this is often equated with progress -- a deterministic forward march towards the universally better along a metaphorical one-dimensional path. In this essay, I'm creating a fictional alternative timeline to bring up some more dimensions. A more moderate pace of Moore's law wouldn't necessarily be that bad after all.

Question: What if Moore's law had been progressing at a half speed since 1980?

I won't try to explain the point of divergence. I just accept that, since 1980, certain technological milestones would have been rarer and further apart. As a result, certain quantities would have doubled only once every four years instead of every two years. RAM capacities, transistor counts, hard disk sizes and clock frequencies would only have reached their 1990 level in the year 2000, and in the year 2013, we would be at roughly the 1996 level with regard to these variables.
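
The arithmetic behind these figures is simple enough to write down. A small back-of-the-envelope sketch of my own, just restating the assumption of half-speed doubling since 1980:

def slow_moore_equivalent(year, divergence=1980):
    """With doubling every four years instead of two, progress runs at half speed."""
    return divergence + (year - divergence) / 2

for year in (2000, 2013):
    print(year, "->", slow_moore_equivalent(year))  # prints 1990.0 and 1996.5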

I'm excluding some hardware-related variables from my speculation. Growth in telecommunications bandwidth, including the spread of broadband, is more related to infrastructural development than to Moore's law. I also consider the technological development of things like batteries, radio transceivers and LCD screens to be unrelated to Moore's law, so their progress would have been more or less unaffected, apart from things like framebuffers and DSP logic.

1. Most milestones of computing culture would not have been postponed.

When I mentioned "the 1996 level", many readers probably envisioned a world where we would be "stuck in the year 1996" in all computing-related aspects. Noisy desktop Pentiums running Windows 95s and Netscape Navigators, with users staring in awe at rainbow-colored, static, GIF-animation-plagued websites over landline dialup connections. This says a lot about mainstream views of computer culture: everything is so one-dimensionally techno-determinist that even progress in purely software- and culture-related aspects is difficult to envision without their supposed hardware prerequisites.

My view is that progress in computing and some other high technology has always been primarily cultural. Things don't become market hits straight after they're invented, and they don't get invented straight after they're technologically possible. For example, there were touchscreen-based mobile computers as early as 1993 (Apple Newton), but it took until 2010 before the cultural aspects were right for their widespread adoption (iPad). In the Slow-Moore world, therefore, a lot of people would have tablets just like in our world, even though they probably wouldn't have very many colors.

The mainstream adoption of the Internet would have taken place in the mid-1990s just like in the real world. 1987-equivalent hardware would have been completely sufficient for the boom to take place. Public online services such as Videotex and BBSes had been available since the late 1970s, and Minitel had already gathered millions of users in France in the 1980s, so even a dumb text terminal would have sufficed on the client side. The power of the Internet compared to its competitors was its global, free and decentralized nature, so it would have taken off among common people even without graphical web browsers.

Assuming that the Internet had become popular with character-based interfaces rather than multimedia-enhanced hypertext documents, its technical timeline would have become somewhat different. Terminal emulators would have eventually accumulated features in the same way as Netscape-like browsers did in the real world. RIPscrip is a real-world example of what could have become dominant: graphics images, GUI components and even sound and video on top of a dumb terminal connection. "Dynamic content" wouldn't require horrible kludges such as "AJAX" or "dynamic HTML", as the dumb terminal approach would have been interactive and dynamic enough to begin with. The gap between graphical and text-based applications would be narrower, as well as the gap between "pre-web" and "modern" online culture.

The development of social media was purely culture-driven: Facebook would have been technically possible already in the 1980s -- feeds based on friend lists don't require more per-user computation than, say, IRC channels. What was needed was cultural development: several "generations" of online services were required before all the relevant ideas came up. In general, most online services I can think of could have appeared in some form or another at about the same time as they did in the real world.

The obvious exceptions would be those services that require a prohibitive amount of server-side storage. An equivalent of Google Street View would perhaps just show rough shapes of the buildings instead of actual photographs. YouTube would focus on low-bitrate animations (something like Flash) rather than on full videos, as the default storage space available per user would be quite limited. Client-side video/audio playback wouldn't necessarily be an issue, since MPEG decompression hardware was already available in some consumer devices in the early 1990s (Amiga CD32) and would have therefore been feasible in the Slow-Moore year 2004. Users would just be more sensitive about disk space and would therefore avoid video formats for content that doesn't require actual video.

All the familiar video games would be there, as the resource-hogging aspects of games can generally be scaled down without losing the game itself. It could even be argued that there would be far more "AAA" titles available, assuming that the average budget per game would be lower due to lower fidelity requirements.

Domestic broadband connections would be there, but they would be more often implemented via per-apartment ethernet sockets than via per-apartment broadband modems. The amount of DSP logic required by some protocols (*DSL) would make per-apartment boxes rather expensive compared to the installation of some additional physical wires. In rural areas, traditional telephone modems would still be rather common.

Mobile phones would be very popular. Their computational specs would be rather low, but most of them would still be able to access Internet services and run downloadable third-party applications. Neither of these requires a lot of power -- in fact, every microprocessor is designed to run custom code to begin with. Very few phones would have built-in cameras, however -- the development of cheap and tiny digital camera cells has a lot to do with Moore's law. Also, the global digital divide would be greater -- there wouldn't be extremely cheap handsets available in poor countries.

It must be emphasized here that even though IC feature sizes would be at the "1996 level", we wouldn't be building devices from the familiar 1996 components. The designs would be far more advanced and logic-efficient. Hardware milestones would have been more about "reinventing the wheel" than about accumulating as much intellectual property as possible on a single chip. RISC and Transputer architectures would have displaced x86-like CISCs a long time ago and perhaps even given way to ingenious inventions we can't even imagine.

Affordable 3D printers would be just around the corner, just like in the real world. Their developmental bottlenecks have more to do with the material printing process itself than anything Moorean. Similarly, the setbacks in the progress of virtual reality helmets have more to do with optics and head-tracking sensors than semiconductors.

2. People would be more conscious about the use of computing resources.

As mentioned before, digital storage would be far less abundant than in the real world. Online services would still have tight per-user disk quotas, and many users would be willing to actually pay for more space. Even laypeople would have a rather good grasp of kilobytes and megabytes and would often put effort into choosing efficient storage formats. All computer users would need to regularly choose what is worth keeping and what isn't. Online privacy would generally be better, as it would be prohibitively expensive for service providers to neurotically keep a complete track record of every user.

As global Internet backbones would have considerably lower capacities than local and mid-range networks, users would actually care about where each server is geographically located. Decentralized systems such as IRC and Usenet would therefore never have given way to centralized services. Search engines would be technically more similar to YaCy than to Google, and social media more similar to Diaspora than to Facebook. Even the equivalent of Wikipedia would be a network of thousands of servers -- a centralized site would have ended up being killed by deletionists. Big businesses would be embracing this "peer-to-peer" world instead of expanding their own server farms.

In general, Internet culture would be more decentralized, ephemeral and realtime than in the real world. Live broadcasts would be more common than vlogs or podcasts. Much less data would be permanently stored, so people would have relatively small digital footprints. Big companies would have far less power over users.

Attitudes towards software development would be quite different, especially with regard to efficiency and optimization. In the real world, wasteful use of computational resources is systematically overlooked because "no one will notice the problem in the future anyway". As a result, we have incredibly powerful computers whose software still suffers from mainframe-era problems such as ridiculously high UI latencies. In a Slow-Moore world, such problems would have been solved a long time ago: after all, all you need is good user-level control over how the operating system prioritizes different pieces of code and data, and some will to use it.

Another problem in real-world software development is the accumulation of abstraction layers. Abstraction is often useful during development, as it speeds up the process and simplifies maintenance, but most of the resulting dependencies are a complete waste of resources in the final product. A lot of this waste could be eliminated automatically by the use of advanced static analysis and other methods. From the vast contrast between carefully size-optimized hobbyist hacks and bloated mainstream software, we might guess that some mind-boggling optimization ratios could be reached. However, the use and development of such tools has been seriously lagging behind because of the attitude problems caused by Moore's law.

In a Slow-Moore world, the use of computing resources would be extremely efficient compared to current standards. This wouldn't mean that hand-coded assembly would be particularly common, however. Instead, we would have something like "hack libraries": huge collections of efficient solutions for various problems, from low-level to high-level, from specific to generic. All tamed, tested and proven in their respective parameter ranges. Software development tools would have intelligent pattern-matchers that would find efficient hacks from these libraries, bolt them together in optimal arrangements and even optimize the bolts away. Hobbyists and professionals alike would be competing in finding ever smarter hacks and algorithms to include in the "wisdombase", thus making all software incrementally more resource-efficient.
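
As a purely speculative illustration of the idea, here is a toy pattern matcher that rewrites a made-up stack-machine representation using a tiny "hack library" of known replacements. The representation and the rules are invented for this sketch; a real tool would of course need far richer patterns and cost models.

HACK_LIBRARY = [
    # (pattern, replacement) over a toy stack-machine representation
    (("PUSH 0", "ADD"), ()),                 # adding zero is a no-op
    (("PUSH 2", "MUL"), ("SHL 1",)),         # multiply by two -> shift left
    (("PUSH 1", "MUL"), ()),                 # multiply by one is a no-op
]

def apply_hacks(code, library=HACK_LIBRARY):
    """Repeatedly rewrite the code until no library pattern matches any more."""
    changed = True
    while changed:
        changed = False
        for pattern, replacement in library:
            n = len(pattern)
            for i in range(len(code) - n + 1):
                if tuple(code[i:i + n]) == pattern:
                    code = code[:i] + list(replacement) + code[i + n:]
                    changed = True
                    break
            if changed:
                break
    return code

print(apply_hacks(["PUSH x", "PUSH 2", "MUL", "PUSH 0", "ADD"]))
# -> ['PUSH x', 'SHL 1']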

3. There would still be a gap between digital and "real" content.

Regardless of how efficiently hardware resources are used, unbreakable limits always exist. In a Slow-Moore world, for instance, film photography would still be superior in quality to digital photography. Also, since digital culture would be far more resource-conscious, large resolutions wouldn't even be desirable in purely digital contexts.

Spreading "memes" as bitmap images is a central piece of today's Internet culture. Even snippets of on-line discussions get spread as bitmapped screenshots. Wasteful, yes, but compatible and therefore tolerable. The Slow-Moore Internet would probably be much more compatible with low-bit formats such as plaintext or vector and character graphics.

Since the beginning of digital culture, there has been a desire to import content from "meatspace" into the digital world. At first, people did it in laborious ways: books were typed into text files, paintings and photographs were repainted with graphics editors, songs were covered with tracker programs. Later, automatic methods appeared: pictures could be scanned, songs could be recorded and compressed into MP3-like formats. However, it took some time before straight automatic imports could compete against skillful manual effort. In low resolutions, skillful pixel-pushing still makes a difference, and synthesized songs take a fraction of the space of an equivalent MP3 recording. Eventually, though, the difference diminished, and no one cared about it any longer.

In a Slow-Moore world, the timeline of digital media would have been vastly different. A-priori-digital content would still have vast advantages over imported media. Artists looking for worldwide appreciation via the Internet would often choose to take the effort to learn born-digital methods instead of just digitizing their analog works. As a result, many traditional disciplines of computer art would have grown enormous. Demoscene and low-bit techniques such as procedural content generation and tracker-like synthesized music would be the mainstream norm in the Internet culture instead of anything "underground".

Small steps towards photorealism and higher fidelity would still be able to impress large audiences, as they would still notice the difference. However, in a resource-conscious online culture, there would also probably be a strong countercultural movement against "high-bit" -- a movement seeking to embrace the established "Internet esthetics" instead of letting it be taken over and marginalized by imports.

Record and film companies would definitely be suing people for importing, covering and spreading their copyrighted material. However, they would still be able to sell it in physical formats because of their superior quality. There would also be a class of snobs who hate all "computer art" and all the related esthetics while preferring "real, physical formats".

4. Conclusion

A Slow-Moore world would be somewhat "backwards" in some respects but far more sensible, or even more advanced, in others. As a demoscener with an ever-growing conflict with today's industry-standard attitudes, I would probably prefer to live with a more moderate level of Moorean inflation. However, a Netflix fan who likes high-quality digital photography and doesn't mind being under surveillance would probably choose otherwise.

The point of my thought experiment was to justify my view that the idea of a linear tech tree strongly tied to Moore's law is a banal oversimplification. There are many other dimensions that need to be taken into account as well.

The alternative timeline may also be used as inspiration for real-world projects. I would definitely like to see whether an aggressively optimizing code generation tool based on "hack libraries" could be feasible. I would also like to see the advent of a mainstream operating system that doesn't suck.

Nevertheless: Down with Moore's law fetishism! It's time for a more mature technological vision!

Saturday, 5 January 2013

I founded a new "oldschool" computer magazine.

Maybe it's a sensible time to tell a bit about what I've been up to for the past few months.

In September 2012, I founded Skrolli, a new Finnish computer magazine. This turn in my life surprised even myself.

It started from an image that went viral. Produced by my friend CCR with a lot of ideas from me, it was a faux magazine cover speculating what the longest-living Finnish home computing magazine, MikroBitti, would be like today if it had never renewed itself after the eighties. The magazine happens to be somewhat iconic to those Finns who got immersed in computing before the turn of the millennium, so the image reached the relevant audience quite efficiently.

The faux cover was meant to be a joke, but the abundance of comments like "I would definitely subscribe to this kind of magazine" made me seriously consider the possibility of actually creating something like it. I put up a simple web page stating the idea of a new "countercultural" computer magazine that is somewhat similar to what MikroBitti used to be like. In just a few days, over a hundred people showed up on the dedicated IRC channel, and here we are.

Bringing the concept of an oldschool microcomputer magazine to the present era requires some thoughtful reflection. The world has changed a lot; computer hobbyists no longer exist as a unified group, for example. Everyone uses a computer for leisure, and it is sometimes difficult to draw a line between those who are interested in the applications and those who are genuinely interested in the technology. Different activities also have their own subcultures with their own communication channels, and it is often hard to relate to someone whose subculture has a very different basis.

Skrolli defines computer culture as something where the computational aspects are irreducible. It is possible to create visual art or music completely without digital technology, for example, but once the computer becomes the very material (as in the case of pixel art or chip music), the creative activity becomes relevant to our magazine. Everything where programming or other direct access to the computational mechanisms is involved is also relevant, of course.

I also chose to target the magazine to my own language group. In a nation of six million, the various subcultures are closer to one another, so it is easier to build a common project that spans the whole scale. The continuing existence of large computer hobbyist events in this country might also simplify the task. If the magazine had been started in English or even German, there would have been a much greater risk of appealing only to a few specialized niches.

In order to keep myself motivated, I have been considering the possibility that Skrolli will actually start a new movement. Something that brings the computational aspects of computer enthusiasm back into daylight and helps the younger generation find a true, non-compromising relationship with digital technology. Once the movement starts growing on its own, without being tied to a single project, language barriers will no longer exist for it.

I will be busy with this stuff for at least a couple of months until we get the first few issues printed (yes, it will be primarily a paper magazine, as a statement against short-lived journalism). After that, it is somewhat likely that I will finish the projects I temporarily abandoned: there will probably be a JIT-enabled version of IBNIZ, and the IBNIZ democoding contest I promised will be arranged. Stay tuned!

Thursday, 19 April 2012

The relationship between "New Aesthetic" and Computationally Minimal Art

A couple of weeks ago, something called "New Aesthetic" was brought to my attention. It is difficult to find any sort of coherent definition for the idea, but it seems like an umbrella label for a wide variety of visual things that somehow look computational, often in not-so-computational contexts. The main spreader of the meme is apparently a Tumblr blog that collects pictures of things such as pixelated glitches in textiles, real-life voxel sculptures, mugs decorated with website graphics, digitally glitched photographs, satellite images, as well as all kinds of other things that evoke suitably futuristic associations.

Despite the profound vagueness of the umbrella term, it is not difficult to notice the general trend it refers to. Just a decade ago, a computationally inspired real-life object would have been a unique novelty item, but nowadays there are such things all around us. I mentioned an aspect of this trend back in 2010 in my article on Computationally Minimal Art, where I noticed that "retrocomputing esthetics" is not just thriving in its respective subcultures (such as the demoscene or the chip music scene) but also popping up every now and then in mainstream contexts -- often completely without the historical or nostalgic vibe usually associated with retrocomputing.

As the concept of "New Aesthetic" overlaps with a lot of my own ponderings, I now feel like building some semantics in order to relate the ideas to one another:

"New Aesthetics", as I see it, is a rather vague umbrella term that contains a wide variety of things but has a major subset that could be called "Computationally Inspired".

"Computationally Inspired" is anything that brings the concepts and building blocks of the "digital world" into non-native contexts. T-shirts, mugs and other real-life objects decorated with big-pixel art or website imagery are obvious examples. In a wide sense, even anything that makes the basic digital building blocks more visible within a digital context might be "Computationally Inspired" as well: big-pixel low-fi computer graphics on a new high-end computer, for example.

"Computationally Minimal" is anything that uses a very low amount of computational resources, often making the digital building blocks such as pixels very discernible. Two years ago, I defined "Computationally Minimal Art" as follows: "[A] form of discrete art governed by a low computational complexity in the domains of time, description length and temporary storage. The most essential features of Computationally Minimal Art are those that persist the longest when the various levels of complexity approach zero."

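To illustrate what low complexity in the description-length domain can mean in practice, here is a minimal example of my own (not from the original article): a classic XOR texture, a visually structured 256x256 image generated by a formula that fits in a couple of lines, written out as a plain binary PGM file.

W = H = 256
with open("xor_texture.pgm", "wb") as f:
    f.write(f"P5 {W} {H} 255\n".encode())  # binary greyscale PGM header
    f.write(bytes((x ^ y) & 0xFF for y in range(H) for x in range(W)))
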
We can see that Computationally Inspired and Computationally Minimal have a lot of overlap, but neither is a subset of the other. Cross-stitch patterns are CM almost by definition, as they have a limited number of discrete "pixels" in a limited number of colors, but they are not CI unless they depict something that comes from the "computer world", such as video game characters. On the other hand, a sculpture based on a large amount of digitally corrupted data is definitely CI but falls outside the definition of CM due to the size of the source data.

What CM and CI, and especially their intersection, have in common is the tendency to show off discrete digital data and/or computational processes, which gives them a lot of esthetic similarity. In CI, this is usually a goal in itself, while in CM, it is most often a side effect of the related goal of low computational complexity. In either case, however, the visual result often looks like big-pixel graphics. This has caused confusion among many New Aesthetic bloggers who use adjectives such as "retro", "8-bit" or "nostalgic" when referring to this phenomenon, when what they are witnessing is simply how the essence of digital technology tends to manifest itself visually.

There has been a lot of online discussion revolving around the New Aesthetic during the past month, and a lot of it seems like pseudo-intellectual, reality-detached mumbo-jumbo to me. In order to gain some insight and substance, I would like to recommend that all the bloggers take a serious look at the demoscene and other established forms of computer-centric expression. They may also find out that a lot of this stuff is actually not that new to begin with; it has just been gaining a lot of new momentum recently.