Working notes from the Scandinavian Institute for Computational Vandalism

Zooming in on Schiaparelli components on Mars

[Image: HiRISE view of the Schiaparelli landing site]

“The erroneous information generated an estimated altitude that was negative – that is, below ground level,” the ESA said in a statement.

“This in turn successively triggered a premature release of the parachute and the backshell [heat shield], a brief firing of the braking thrusters and finally activation of the on-ground systems as if Schiaparelli had already landed. In reality, the vehicle was still at an altitude of around 3.7km (2.3 miles).”
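The failure sequence described in the statement can be sketched as a toy landing state machine. This is a hypothetical illustration, not ESA's flight software: any sequencer that treats an altitude estimate at or below zero as touchdown will fire its terminal-descent events early when a corrupted measurement drives the estimate negative.

```python
def descent_events(estimated_altitude_m):
    """Toy landing sequencer (hypothetical, not ESA flight software):
    terminal-descent events fire once the altitude estimate reaches
    ground level, i.e. drops to zero or below."""
    if estimated_altitude_m <= 0:
        return ["release_parachute_and_backshell",
                "brief_thruster_firing",
                "activate_on_ground_systems"]
    return []

# Nominal descent: still 3.7 km up, no terminal events yet.
print(descent_events(3700.0))   # []

# A corrupted measurement drives the estimate negative, and the full
# landing sequence fires while the craft is still high above ground.
print(descent_events(-2.0))
```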

The €230m ($251m) Schiaparelli had spent seven years travelling 496m kilometres (308m miles) onboard the so-called Trace Gas Orbiter to within a million kilometres of Mars when it set off on its own mission to reach the surface.

Source: https://www.theguardian.com/science/2016/nov/24/mars-lander-smashed-into-ground-at-540kmh-after-misjudging-its-altitude

NASA’s Mars Reconnaissance Orbiter High Resolution Imaging Science Experiment (HiRISE) imaged the ExoMars Schiaparelli module’s landing site on 25 October 2016, following the module’s arrival at Mars on 19 October.

The zoomed insets provide close-up views of what are thought to be several different hardware components associated with the module’s descent to the martian surface. These are interpreted as the front heatshield, the parachute and the rear heatshield to which the parachute is still attached, and the impact site of the module itself.

In the image, north is up and west is to the left. Schiaparelli was travelling from west to east. The image scale is 29.5 cm/pixel. The brightness of the individual zooms has been adjusted to best reveal the features against the martian surface in each case.

The 100 m scale bar in the main image is only indicative, as the HiRISE image was taken at an oblique angle. The distances given between the various components in the main text have been corrected for this effect.
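The caption's numbers allow a back-of-the-envelope check of that correction. A minimal sketch, assuming a simple cosine foreshortening model and an illustrative off-nadir angle (the actual HiRISE viewing geometry is not given here):

```python
import math

def ground_distance_m(pixel_separation, cm_per_pixel=29.5, off_nadir_deg=0.0):
    """Approximate ground distance between two features in the image.
    The 29.5 cm/pixel scale comes from the caption; the off-nadir angle
    is an illustrative assumption.  In an oblique view, separations
    along the look direction appear foreshortened by cos(angle), so the
    apparent distance is divided by that factor."""
    apparent_m = pixel_separation * cm_per_pixel / 100.0
    return apparent_m / math.cos(math.radians(off_nadir_deg))

# At nadir, ~339 pixels span about 100 m ...
print(round(ground_distance_m(339), 1))                      # 100.0
# ... but at an assumed 30 degrees off-nadir the same pixel span
# corresponds to a longer true distance on the ground.
print(round(ground_distance_m(339, off_nadir_deg=30.0), 1))  # 115.5
```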

Image source: http://www.esa.int/spaceinimages/Images/2016/10/Zooming_in_on_Schiaparelli_components_on_Mars

http://sicv.activearchives.org/toolbox/Zooming_in_on_Schiaparelli_components_on_Mars.html

Texture mapping

[Image: texture mapping experiment from the SICV toolbox]

Code: http://gitlab.constantvzw.org/SICV/toolbox

Universal Slide Show

[Image: screenshot of babelia.libraryofbabel.info]

https://babelia.libraryofbabel.info/

Part of the larger http://libraryofbabel.info/ project.

Accessorize to a crime


A team of researchers from Pittsburgh’s Carnegie Mellon University have created sets of eyeglasses that can prevent wearers from being identified by facial recognition systems, or even fool the technology into identifying them as completely unrelated individuals.

In their paper, Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, presented at the 2016 Computer and Communications Security conference, the researchers present their system for what they describe as “physically realisable” and “inconspicuous” attacks on facial biometric systems, which are designed to exclusively identify a particular individual.

The attack works by taking advantage of differences in how humans and computers understand faces. By selectively changing pixels in an image, it’s possible to leave the human-comprehensible facial image largely unchanged, while flummoxing a facial recognition system trying to categorise the person in the picture.
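The mechanism can be illustrated with a toy linear "recogniser". This is a minimal sketch, not the authors' eyeglass-frame attack: because the model's decision depends on a weighted sum of pixels, small per-pixel nudges chosen against the gradient flip the decision while leaving the image visually similar.

```python
def score(weights, image):
    """Toy linear 'recogniser': a positive score means 'person A'."""
    return sum(w * x for w, x in zip(weights, image))

def adversarial(weights, image, eps):
    """FGSM-style perturbation: move each pixel by at most eps in the
    direction that most lowers the score.  For a linear model the
    gradient with respect to a pixel is simply its weight."""
    return [x - eps * (1 if w > 0 else -1) for w, x in zip(weights, image)]

weights = [0.8, -0.5, 0.3, 0.9]   # hypothetical model parameters
image   = [0.5, 0.4, 0.3, 0.4]    # hypothetical 4-"pixel" face image

adv = adversarial(weights, image, eps=0.3)
print(score(weights, image) > 0)  # True:  recognised as 'person A'
print(score(weights, adv) > 0)    # False: visually similar, misclassified
```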

Source: https://www.theguardian.com/technology/2016/nov/03/how-funky-tortoiseshell-glasses-can-beat-facial-recognition

Research Paper: Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition
Sharif, Bhagavatula, Bauer, Reiter
https://www.cs.cmu.edu/~sbhagava/papers/face-rec-ccs16.pdf

NSFW “dreams”

In September 2016, engineers at Yahoo released a machine learning model trained to detect “NSFW” images. The acronym NSFW, “not safe for work”, is net slang for images that typically contain nudity or graphic sexual content. More recently, in October 2016, PhD researcher Gabriel Goh released experiments applying a means of generating images from such models.

https://github.com/yahoo/open_nsfw

Detecting offensive / adult images is an important problem which researchers have tackled for decades. With the evolution of computer vision and deep learning the algorithms have matured and we are now able to classify an image as not suitable for work with greater precision.

Defining NSFW material is subjective and the task of identifying these images is non-trivial. Moreover, what may be objectionable in one context can be suitable in another. For this reason, the model we describe below focuses only on one type of NSFW content: pornographic images. The identification of NSFW sketches, cartoons, text, images of graphic violence, or other types of unsuitable content is not addressed with this model.

Since images and user generated content dominate the internet today, filtering nudity and other not-safe-for-work images becomes an important problem. In this repository we open-source a Caffe deep neural network for preliminary filtering of NSFW images.
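The released model outputs an NSFW probability between 0 and 1; a hypothetical filtering wrapper around such scores might look like this (the threshold and function name are illustrative, not part of open_nsfw's Caffe API):

```python
def filter_images(nsfw_scores, threshold=0.8):
    """Hypothetical preliminary filter around open_nsfw-style scores:
    the model emits an NSFW probability in [0, 1], and images scoring
    above the (illustrative) threshold are flagged for review."""
    return ["flag" if s > threshold else "pass" for s in nsfw_scores]

print(filter_images([0.02, 0.95, 0.40]))  # ['pass', 'flag', 'pass']
```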

https://open_nsfw.gitlab.io/

What makes an image NSFW, according to Yahoo? I explore this question with a clever new visualization technique by Nguyen et al. Like Google’s Deep Dream, this visualization trick works by maximally activating certain neurons of the classifier. Unlike Deep Dream, we optimize these activations by performing descent on a parameterization of the manifold of natural images. This parametrization takes the form of a Generative Network, trained adversarially on an unrelated dataset of natural images.
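The underlying trick, gradient ascent on the input rather than on the weights, can be shown with a toy one-neuron example. This is a minimal sketch with made-up numbers; Goh's experiments additionally optimise through a generative network rather than over raw inputs.

```python
def neuron(x):
    """Toy 'neuron' that fires most strongly when the input matches a
    preferred pattern (the pattern here is made up for illustration)."""
    target = [0.9, 0.1, 0.5]
    return -sum((xi - ti) ** 2 for xi, ti in zip(x, target))

def activation_maximization(f, x, steps=200, lr=0.1, h=1e-5):
    """Gradient ascent on the *input*: repeatedly nudge the image so
    the chosen neuron fires more strongly.  Gradients are estimated by
    finite differences to keep the sketch dependency-free."""
    for _ in range(steps):
        grad = []
        for i in range(len(x)):
            xp = list(x)
            xp[i] += h
            grad.append((f(xp) - f(x)) / h)
        x = [xi + lr * g for xi, g in zip(x, grad)]
    return x

result = activation_maximization(neuron, [0.0, 0.0, 0.0])
print([round(v, 2) for v in result])  # [0.9, 0.1, 0.5] -- the preferred stimulus
```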



This divine scanner

al-haytham-new_optics_03

In How One Sees, Siegfried Zielinski and Franziska Latell trace the genealogy of vision. Quoting the philosopher Abu Nasr Al-Farabi (870-950), the authors write:

Optics teach “according to the true circumstances of what is looked at to find the matter, the quantity, form, position and order and the other things which of the things is where the gaze can be mistaken […] That is done with an object via an instrument, which serves to direct the gaze in such a way that it does not err”.[…] Al-Farabi assumes an active ray of sight; it carries light from inside the eye or human body and is beamed through the eye at external objects to scan them for perception. This divine scanner is one of the foundations of the Platonic world-view.

The image above is a sketch illustrating the theories of Ibn al-Haytham, astrophysicist, inventor of the camera obscura and keen moon-gazer, for whom light is emitted from luminous objects and finds its way to the retina. Al-Haytham’s model contradicts the active-ray-of-sight model and would go on to influence medieval perspectivists and aid the development of perspectival representation.

How One Sees, Siegfried Zielinski and Franziska Latell in Variantology 4: On Deep Time Relations of Arts, Sciences and Technologies In the Arabic-Islamic World and Beyond (Kunstwissenschaftliche Bibliothek), 2011.

Situological

Very interested in the situological and situographical development of topology. It will be necessary to stay rapidly informed of all the scientific conclusions about this — and to adapt or détourn them. The primary force of our position is to intervene therein as an artistic activity (with a game of gestures raised to the dignity of art) whereas the former dominant tendency was toward objective observation.

Guy Debord in letter to Asger Jorn, July 6 1960

Guy Debord: Correspondence: The Foundation of the Situationist International (June 1957 — August 1960). Translated by Stuart Kendall and John McHale. Semiotext(e), 2009.

Funes the Memorious

He was, let us not forget, almost incapable of ideas of a general, Platonic sort. Not only was it difficult for him to comprehend that the generic symbol dog embraces so many unlike individuals of diverse size and form; it bothered him that the dog at three fourteen (seen from the side) should have the same name as the dog at three fifteen (seen from the front). His own face in the mirror, his own hands, surprised him every time he saw them.

With no effort, he had learned English, French, Portuguese and Latin. I suspect, however, that he was not very capable of thought. To think is to forget differences, generalize, make abstractions. In the teeming world of Funes, there were only details, almost immediate in their presence.

From “Funes the Memorious”, Jorge Luis Borges, Labyrinths, translated by J.E.I., pp. 69–74


Correspondences

[Images: doll-001 and doll-002]

Nevatia, R. & Binford, T.O. (1977) Description and Recognition of Curved Objects, Artificial Intelligence 8(1): 77–98.

Native contours

[Image: line-drawing figure from Hochberg & Brooks, 1962]
In this experiment [Hochberg & Brooks, 1962], a human baby was raised until the age of 19 months under the constant supervision of his parents, who avoided exposing the child to line-drawings or two-dimensional pictures of any kind. Although the baby accidentally had opportunities to glance at some pictures on a few occasions, at no point was the content of a picture ever named to him, nor was other attention drawn to it. All of the baby’s playthings were chosen so that they had solid coloring and no two-dimensional patterning of any kind. Finally, at the age of 19 months, the child was shown some line-drawings for the first time, including those illustrated in Figure 1-3. The child was immediately able to recognize objects in these drawings with no reported difficulty, and performed equally well when identifying the contents of black-and-white photographs.

Found in: Lowe, D.G. (1985) Perceptual Organization and Visual Recognition , http://www.dtic.mil/dtic/tr/fulltext/u2/a150826.pdf