Working notes from the Scandinavian Institute for Computational Vandalism

The ‘dawning’ of an aspect

“I contemplate a face, and then suddenly notice its likeness to another. I see that it has not changed; and yet I see it differently. I call this experience “noticing an aspect…” And I must distinguish between the ‘continuous seeing’ of an aspect and the ‘dawning’ of an aspect… I see two pictures, with the duck-rabbit surrounded by rabbits in one, by ducks in the other. I do not notice that they are the same. Does it follow from this that I see something different in the two cases? It gives us a reason for using this expression here. “I saw it quite differently, I should never have recognized it!” Now, that is an exclamation. And there is also a justification for it. I should never have thought of superimposing the heads like that, of making this comparison between them…. I describe the alteration (change of aspect) like a perception; quite as if the object had altered before my eyes…. The expression of a change of aspect is the expression of a new perception and at the same time of the perception’s being unchanged. I suddenly see the solution of a puzzle-picture.”

Ludwig Wittgenstein, Philosophical Investigations.

Every annotation is an act of vandalism

Asger Jorn reading Sartre’s L’Imaginaire, questioning Sartre’s floating use of the term intentionality with regard to the photograph. The fragment ambiguously describes the photograph, or its content (the characters it depicts), as an object without particular intentionality.

Jorn sees intentionality potentially elsewhere: “Intentionality on whose part, the artist?”

A vision substitution system

Two illustrations of Bach-y-Rita’s vision substitution system, 1969.

“Four hundred solenoid stimulators are arranged in a twenty x twenty array built into a dental chair. The stimulators, spaced 12 mm apart, have 1 mm diameter “Teflon” tips which vibrate against the skin of the back (Fig. 1). Their on-off activity can be monitored visually on an oscilloscope as a two-dimensional pictorial display (Fig. 2). The subject manipulates a television camera mounted on a tripod, which scans objects placed on a table in front of him. Stimuli can also be presented on a back-lit screen by slide or motion picture projection. The subject can aim the camera, equipped with a zoom lens, at different parts of the room, locating and identifying objects or persons.
Six blind subjects have undergone extensive training and testing with the apparatus.”
[…]
“Our subjects spontaneously report the external localization of stimuli, in that sensory information seems to come from in front of the camera, rather than from the vibrotactors on their back. Thus after sufficient experience, the use of the vision substitution system seems to become an extension of the sensory apparatus.”
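The report gives the device’s key numbers: four hundred stimulators in a twenty-by-twenty grid, each simply on or off. A rough sketch of how such a camera-to-skin mapping could look (our reading of the description, not code from the report; the block averaging and the threshold value are assumptions):

```python
import numpy as np

GRID = 20  # 20 x 20 = 400 stimulators, as in the report

def frame_to_tactile(frame, threshold=0.5):
    """Reduce a grayscale frame (values 0..1) to a 20x20 boolean on/off array."""
    h, w = frame.shape
    bh, bw = h // GRID, w // GRID
    # Average the brightness of each block of pixels, one block per stimulator.
    blocks = frame[:bh * GRID, :bw * GRID].reshape(GRID, bh, GRID, bw)
    brightness = blocks.mean(axis=(1, 3))
    return brightness > threshold  # True = stimulator vibrates against the skin

# Example: a bright rectangle in front of the camera activates a patch of stimulators.
frame = np.zeros((240, 320))
frame[80:160, 120:200] = 1.0
print(frame_to_tactile(frame).astype(int))
```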

Read the full report

Zero bandwidth video

MIT’s experiments in the 1980s to give voice synthesizers a face.

The cloudy days of machine learning

Once upon a time, the US Army wanted to use neural networks to automatically detect camouflaged enemy tanks. The researchers trained a neural net on 50 photos of camouflaged tanks in trees, and 50 photos of trees without tanks. Using standard techniques for supervised learning, the researchers trained the neural network to a weighting that correctly loaded the training set—output “yes” for the 50 photos of camouflaged tanks, and output “no” for the 50 photos of forest. This did not ensure, or even imply, that new examples would be classified correctly. The neural network might have “learned” 100 special cases that would not generalize to any new problem. Wisely, the researchers had originally taken 200 photos, 100 photos of tanks and 100 photos of trees. They had used only 50 of each for the training set. The researchers ran the neural network on the remaining 100 photos, and without further training the neural network classified all remaining photos correctly. Success confirmed! The researchers handed the finished work to the Pentagon, which soon handed it back, complaining that in their own tests the neural network did no better than chance at discriminating photos.

It turned out that in the researchers’ dataset, photos of camouflaged tanks had been taken on cloudy days, while photos of plain forest had been taken on sunny days. The neural network had learned to distinguish cloudy days from sunny days, instead of distinguishing camouflaged tanks from empty forest.
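The moral is easy to reproduce. Below is a toy reconstruction (ours, not the Army’s setup): mean image brightness stands in for the cloudy/sunny confound, and a classifier that only looks at brightness passes a held-out test perfectly while having learned nothing about tanks.

```python
# Toy illustration of a spurious feature surviving a held-out test.
# Fabricated data: the label ("tank" vs "forest") is perfectly correlated with
# brightness, standing in for the cloudy/sunny confound in the story.
import numpy as np

rng = np.random.default_rng(0)

def make_photo(cloudy):
    """Fake 32x32 grayscale 'photo': cloudy images are darker overall."""
    base = 0.3 if cloudy else 0.7
    return np.clip(base + 0.1 * rng.standard_normal((32, 32)), 0.0, 1.0)

# 100 'tank' photos taken on cloudy days, 100 'forest' photos on sunny days.
photos = [make_photo(cloudy=True) for _ in range(100)] + \
         [make_photo(cloudy=False) for _ in range(100)]
labels = np.array([1] * 100 + [0] * 100)          # 1 = tank, 0 = forest

# Shuffle, then hold half of the data back, as in the story.
order = rng.permutation(200)
X = np.array([p.mean() for p in photos])[order]   # feature: mean brightness
y = labels[order]
X_train, X_test = X[:100], X[100:]
y_train, y_test = y[:100], y[100:]

# 'Training': pick the brightness threshold that separates the training set.
threshold = (X_train[y_train == 1].max() + X_train[y_train == 0].min()) / 2
pred = (X_test < threshold).astype(int)           # darker -> predict 'tank'

print("held-out accuracy:", (pred == y_test).mean())  # ~1.0, yet no tank was ever seen
```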

Read Jeff Kaufman tracking the source of this story.

Convolutional Network, 1993

In 1988, Yann LeCun joined the Adaptive Systems Research Department at AT&T Bell Laboratories in Holmdel, New Jersey, United States, headed by Lawrence D. Jackel, where he developed a number of new machine learning methods, such as a biologically inspired model of image recognition called Convolutional Neural Networks, the “Optimal Brain Damage” regularization methods, and the Graph Transformer Networks method, which he applied to handwriting recognition and OCR. The bank check recognition system that he helped develop was widely deployed by NCR and other companies, reading over 10% of all the checks in the US in the late 1990s and early 2000s. Today LeCun is director of Facebook AI Research in New York City.
(edited from Wikipedia)
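For a sense of the basic operation behind the demo (our illustration, not LeCun’s code): a convolutional network slides the same small learned filter over every position of the image, so a detector for a stroke or an edge responds wherever that stroke appears in the handwritten character.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most CNNs)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge filter: responds strongly wherever a stroke runs up-down.
edge_filter = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])
digit = np.zeros((8, 8))
digit[:, 3] = 1.0        # crude vertical stroke
print(convolve2d(digit, edge_filter))
```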

More demos on LeCun’s website.

Oscilloscope party

“Tennis for Two was first introduced on October 18, 1958, at one of the Lab’s annual visitors’ days. Two people played the electronic tennis game with separate controllers that connected to an analog computer and used an oscilloscope for a screen. The game’s creator, William Higinbotham, was a nuclear physicist who lobbied for nuclear nonproliferation as the first chair of the Federation of American Scientists.”

Read more

Geoff Cox, from Speaking Code to algorithms


Geoff Cox discussing the ecology of algorithms on the occasion of the launch of the Cqrrelations website.

“What would algorithms say if they could speak? We could say the same of data of course. If it was allowed to speak what would it say about itself? It probably wouldn’t say it is raw and unmediated. It would obviously give us a lot of detail on these processes of mediation.”

Download and listen

Algorithms before computers

“The only way you could formulate a complete rule (in premodern sensibility): you had to foresee the exceptions; it is both specific and supple. The habit doesn’t simply enforce the rule, it embodies it, just like this spear-bearer statue embodies the canon of male beauty. More than that, the habit’s discretion is not supplementary to the rule, it is part of the rule. It is the leaden ruler that adjusts the straight iron ruler to the curves of the individual case.”

Watch Lorraine Daston on Algorithms before Computers

The Variability of Vision

LATE IN 1967 a book was published in England which is as charming as it must be fascinating to all who are interested in the theory and practice of interpretation. Its principal author is the Dutch naturalist and ethologist Niko Tinbergen, who combined with a friend and artist to photograph the tracks left in the sand of the dunes by a variety of creatures and to reconstruct the stories they reveal in word and in image. The illustration I selected (Figure 1) shows the tracks of an oyster catcher peacefully walking along over the dunes till something apparently alarmed it, the walk turned into a hop, leaving deeper imprints in the sand, and it took off on its wings. This is not all the naturalists could infer from the configuration of the sand. They know that a bird cannot take off except precisely against the wind. At the time of the event, therefore, the wind must have blown from the left of the picture. But if you observe the ripples of the sand, they were formed by a wind coming from the direction of the camera. Accordingly, the tracks correctly interpreted reveal another story of the past: there was a change of wind between the formation of the ripples and that of the footprints. Not all of the picture illustrated is a photograph. What the artist has done is to superimpose on it his reconstruction of the oyster catcher taking off. This is how it appears to his mind’s eye and how, he is sure, it would have looked to the camera if one had been present at the moment.

E. H. Gombrich, “The Evidence of Images: I. The Variability of Vision”, in C. S. Singleton (ed.), Interpretation: Theory and Practice, 1969, pp. 35–68.
[Trapp no.1969C.1]