Building a social detector: Mass Observation

By Professor Julian Huxley

Science in its progress is advancing nearer to the human heart of things. The first great advances in the scientific renaissance were made in the remoter and simpler fields of astronomy and physics. Then followed chemistry and, a little later, general biology and physiology. The great revolution in regard to individual psychology did not take place until well into the present century. Now it is the turn of the most complex of all the sciences, sociology, which is also the nearest to home, since we live immersed in society as a fish in water, and our ways of thinking and feeling are molded by the social framework.

Within the social sciences, social anthropology holds an essential place. Yet, with few exceptions, it has so far chosen its material from among primitive and out-of-the-way peoples. Here again the trend must be from the remote to the near at hand. Not only scientifically but practically, it is urgent to obtain detailed and unbiased information as to the mode of thinking of the larger, more powerful and economically more important groups of human beings. Most urgent of all is to obtain such knowledge about our own group, the English people.


A close listening to the technical workings of computers and their networks

In Algorhythmics: Understanding Micro-Temporality in Computational Cultures, Shintaro Miyazaki discusses the importance of rhythm for understanding the performances of algorithms.

“According to Burton [chief engineer of the Manchester Small-Scale Experimental Machine-Resurrection-Project], the position of the so-called “noise probe” was variable, thus different sound sources could be heard and auscultated. These could be individual flip-flops in different registers, different data bus nodes or other streams and passages of data traffic. Not only was passive listening of the computer-operations very common, but was also an active exploration of the machine, listening to its rhythms. Burton continues,

“Very common were contact failures where a plug-in package connector to the ‘backplane’ had a poor connection. These would be detected by running the Test Program set to continue running despite failures (not always possible, of course), and then listening to the rhythmic sound while going round the machine tapping the hardware with the fingers or with a tool, waiting for the vibration to cause the fault and thus change the rhythm of the ‘tune’.”

The routine of algorhythmic listening was widespread, but quickly forgotten. One convincing reason for the lack of technical terms such as algorhythm or algorhythmic listening is the fact that the term algorithm itself was not popular until the 1960s. Additionally, in the early 1960s many chief operators and engineers of mainframe computers were made redundant, since more reliable software-based operators called operating systems were introduced. These software systems were able to listen to their own operations and algorhythms by themselves without any human intervention. Scientific standards and rules of objectivity might have played an important role as well, since listening was more an implicit form of knowledge than any broadly accepted scientific method. Diagrams, written data or photographic evidence were far more convincing in such contexts. Finally, the term rhythm itself is rarely used in the technical jargon – instead the term ‘clock’ or ‘pulse’ is preferred.”

Read the full paper
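The fault-hunting Burton describes, waiting for a vibration to break the rhythm of the "tune", can be loosely sketched in code. In the toy sketch below, a hypothetical probe yields a stream of bits, and rendering each clock tick as a symbol makes the pattern, and any break in it, visible rather than audible. All names and values are illustrative, not taken from the sources above.

```python
def rhythm(bits, tick="x", rest="."):
    """Render a probed bit stream as a rhythm pattern, one symbol per clock tick."""
    return "".join(tick if b else rest for b in bits)

# a healthy loop produces a steady, repeating pattern...
healthy = [1, 0, 0, 1, 0, 0] * 4

# ...while a fault (say, a loose connector jolted by a tap) breaks the beat
faulty = healthy[:10] + [1, 1, 1] + healthy[13:]

print(rhythm(healthy))  # steady tune
print(rhythm(faulty))   # same length, different tune
```

An engineer "listening" to these two renderings would spot the fault exactly as Burton describes: not by decoding any single tick, but by noticing where the repetition falters.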

This practice of close listening has been documented at the Natuurkundig Laboratorium in Eindhoven. As explained on Ip’s Ancient Wonderworld:

“The head of the NatLab at the time, a certain W. Nijenhuis, had the idea of installing a small amplifier and loudspeaker on the PASCAL, which would pick up radio frequency interference generated by the machine. Unsurprisingly, the usual course of events unfolds: While actually intended for diagnostic purposes, people quickly discovered that they could abuse the speaker to make simple music. But Mr Nijenhuis, rather than scolding his staff for the waste of precious calculation time, actually decides to record those rekengeluiden (computing noises) on a 45 rpm vinyl.”

Read more

Listen to the recordings

The blue eyes of Argopecten irradians

Bay Scallop, Scientific Name: Argopecten irradians

Common Name: Bay Scallop, Scientific Name: Argopecten irradians, Specimen #: 27, Size: 2.5 inches across (side to side), Notes: specimens ordered from Gulf Specimen Marine Laboratories: (850) 984-5297, collected near Panacea, Florida, Location: David Liittschwager’s Home Studio: 120 Parnassus Ave #1, San Francisco, CA, 94117

“The mantle of the bay scallop (Argopecten irradians) is festooned with up to 100 brilliant blue eyes. Each contains a mirrored layer that acts as a focusing lens while doubling the chance of capturing incoming light.”

Read National Geographic’s history of the eye’s evolution.

Masks for algorithms

“They (Apple engineering teams) have even gone and worked with professional mask makers and makeup artists in Hollywood to protect against these attempts to beat Face ID. These are actual masks used by the engineering team to train the neural network to protect against them in Face ID. It’s incredible!”, Phil Schiller said (Apple’s Keynote September 2017, from 1:27:10 to 1:27:26).

Mr. Ngo Tuan Anh, Bkav’s Vice President of Cyber Security, said: “The mask is crafted by combining 3D printing with makeup and 2D images, besides some special processing on the cheeks and around the face, where there are large skin areas, to fool AI of Face ID”.

Shop by image

The company explains it’s using two core parts of artificial intelligence – computer vision and deep learning – to power these new features. When images are uploaded to eBay, it uses a deep learning model called a convolutional neural network to process the images. The images are compared to the site’s live listings, ranked based on visual similarity, then returned quickly using eBay’s open-source Kubernetes platform.
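The ranking step can be sketched in a few lines: embed each image as a vector, then sort live listings by their cosine similarity to the query. In the sketch below, small hand-made vectors stand in for the embeddings a convolutional network would produce; the listing names and numbers are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank_listings(query, listings):
    """Return (listing id, score) pairs sorted by visual similarity to the query."""
    scored = [(lid, cosine_similarity(query, vec)) for lid, vec in listings.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)

# toy embeddings standing in for CNN outputs
listings = {
    "red-handbag": [0.9, 0.1, 0.0],
    "blue-shoe": [0.1, 0.9, 0.2],
    "red-scarf": [0.8, 0.2, 0.1],
}
query = [0.95, 0.05, 0.0]  # embedding of the uploaded photo
ranking = rank_listings(query, listings)
```

The gap between "visually similar" and "the exact item" discussed below lives entirely inside that embedding: two handbags that photograph alike land close together in vector space, whether or not they are the same product.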

Although visual search is a trendy new way to shop online, its overall usefulness remains unproven in the long run. Retailers aren’t releasing stats about how many image searches have actually translated into sales, and arguably, the results can be hit or miss.

One of the problems is that when you’re looking at a fashion item in a photo, many times you want to find something just like it – not something only “visually similar.”

That is, there are times you just want a small, red handbag with a gold chain and any ol’ handbag of this nature will do; then there are times when you want the exact handbag you’re frothing over right now, or something nearly identical.

eBay launches visual search tools that let you shop using photos from your phone or web

The ‘dawning’ of an aspect

“I contemplate a face, and then suddenly notice its likeness to another. I see that it has not changed; and yet I see it differently. I call this experience “noticing an aspect…” And I must distinguish between the ‘continuous seeing’ of an aspect and the ‘dawning’ of an aspect… I see two pictures, with the duck-rabbit surrounded by rabbits in one, by ducks in the other. I do not notice that they are the same. Does it follow from this that I see something different in the two cases? It gives us a reason for using this expression here. “I saw it quite differently, I should never have recognized it!” Now, that is an exclamation. And there is also a justification for it. I should never have thought of superimposing the heads like that, of making this comparison between them…. I describe the alteration (change of aspect) like a perception; quite as if the object had altered before my eyes…. The expression of a change of aspect is the expression of a new perception and at the same time of the perception’s being unchanged. I suddenly see the solution of a puzzle-picture.”

Ludwig Wittgenstein, Philosophical Investigations.


Every annotation is an act of vandalism

Asger Jorn reading Sartre’s L’Imaginaire, questioning Sartre’s floating use of the term intentionality regarding the photograph. The fragment ambiguously describes the photograph, or its content (the characters it depicts), as an object without particular intentionality.

Jorn sees intentionality potentially elsewhere: “Intentionality on whose part, the artist?”

A vision substitution system

Two illustrations of Bach-y-Rita’s vision substitution system, from 1969.

“Four hundred solenoid stimulators are arranged in a twenty x twenty array built into a dental chair. The stimulators, spaced 12 mm apart, have 1 mm diameter “Teflon” tips which vibrate against the skin of the back (Fig. 1). Their on-off activity can be monitored visually on an oscilloscope as a two-dimensional pictorial display (Fig. 2). The subject manipulates a television camera mounted on a tripod, which scans objects placed on a table in front of him. Stimuli can also be presented on a back-lit screen by slide or motion picture projection. The subject can aim the camera, equipped with a zoom lens, at different parts of the room, locating and identifying objects or persons.
Six blind subjects have undergone extensive training and testing with the apparatus.”
“Our subjects spontaneously report the external localization of stimuli, in that sensory information seems to come from in front of the camera, rather than from the vibrotactors on their back. Thus after sufficient experience, the use of the vision substitution system seems to become an extension of the sensory apparatus.”

Read the full report
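The camera-to-tactor mapping can be sketched as a simple downsampling step: average the block of pixels that falls on each stimulator's cell, and switch the stimulator on or off against a threshold. This is a loose illustration of the idea, not the report's actual signal chain; the frame layout and threshold value are assumptions.

```python
def to_tactor_grid(frame, size=20, threshold=128):
    """Downsample a grayscale frame (rows of 0-255 pixels) to a
    size x size on/off grid, one cell per solenoid stimulator."""
    h, w = len(frame), len(frame[0])
    grid = []
    for i in range(size):
        row = []
        for j in range(size):
            # pixels covered by this stimulator's cell
            vals = [frame[y][x]
                    for y in range(i * h // size, (i + 1) * h // size)
                    for x in range(j * w // size, (j + 1) * w // size)]
            mean = sum(vals) / len(vals)
            row.append(1 if mean >= threshold else 0)
        grid.append(row)
    return grid

# a bright vertical bar on the left half of a 40 x 40 frame
frame = [[255 if x < 20 else 0 for x in range(40)] for _ in range(40)]
grid = to_tactor_grid(frame)
```

Each row of `grid` corresponds to one row of the 20 x 20 solenoid array: the left half of the back vibrates, the right half stays still, and the subject feels the bar.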

Zero bandwidth video

MIT’s experiments in the 1980s to give voice synthesizers a face.

The cloudy days of machine learning

Once upon a time, the US Army wanted to use neural networks to automatically detect camouflaged enemy tanks. The researchers trained a neural net on 50 photos of camouflaged tanks in trees, and 50 photos of trees without tanks. Using standard techniques for supervised learning, the researchers trained the neural network to a weighting that correctly loaded the training set—output “yes” for the 50 photos of camouflaged tanks, and output “no” for the 50 photos of forest. This did not ensure, or even imply, that new examples would be classified correctly. The neural network might have “learned” 100 special cases that would not generalize to any new problem. Wisely, the researchers had originally taken 200 photos, 100 photos of tanks and 100 photos of trees. They had used only 50 of each for the training set. The researchers ran the neural network on the remaining 100 photos, and without further training the neural network classified all remaining photos correctly. Success confirmed! The researchers handed the finished work to the Pentagon, which soon handed it back, complaining that in their own tests the neural network did no better than chance at discriminating photos.

It turned out that in the researchers’ dataset, photos of camouflaged tanks had been taken on cloudy days, while photos of plain forest had been taken on sunny days. The neural network had learned to distinguish cloudy days from sunny days, instead of distinguishing camouflaged tanks from empty forest.

Read Jeff Kaufman tracking the source of this story.
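The confound is easy to reproduce in miniature. In the sketch below (all numbers invented), brightness is driven entirely by the weather; every training tank photo is cloudy and every forest photo sunny, so a classifier that only thresholds brightness scores perfectly on the confounded training set yet falls to near chance once weather and tanks are decoupled.

```python
import random

random.seed(0)

def make_photo(tank, cloudy):
    # brightness is driven by the weather, not by the tank
    brightness = random.gauss(60 if cloudy else 200, 10)
    return {"brightness": brightness, "tank": tank}

# confounded dataset: every tank photo is cloudy, every forest photo sunny
train = [make_photo(True, True) for _ in range(50)] + \
        [make_photo(False, False) for _ in range(50)]

def classify(photo, threshold=130):
    # the "learned" rule: dark photo -> tank
    return photo["brightness"] < threshold

train_acc = sum(classify(p) == p["tank"] for p in train) / len(train)

# deconfounded test set: weather is now independent of tanks
test = [make_photo(random.random() < 0.5, random.random() < 0.5)
        for _ in range(200)]
test_acc = sum(classify(p) == p["tank"] for p in test) / len(test)
```

The held-out 100 photos in the anecdote did not expose the problem because they came from the same confounded collection; only the Pentagon's independently gathered photos played the role of the deconfounded `test` set here.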