Aad Björkro on Sat, 26 Oct 2019 15:32:19 +0200 (CEST)



Re: <nettime> Algorithms that Matter Symposium 2020: Call for contributions


Hi Hanns, Francis, Everyone

(Hanns Holger Rutz): [...] classical works on computation by Turing, von Neumann etc. were based on the idea that data=""
Would this not be code=data? This is one aspect which has always fascinated me, the strange loop of executable data -- which of course is so vulnerable to abuse that a large part of why we need operating systems is to make sure this is not the case where users would not expect it to be: protected ELF segments, non-executable stacks, etc.
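To make the strange loop concrete, here is a minimal Python sketch (illustrative only) of data being promoted to code -- exactly the move that memory protection is meant to forbid when it happens unexpectedly:

```python
# Data becoming code: a string (plain data) is compiled into a code
# object and executed. This is the loop operating systems guard against
# when it is unexpected (W^X pages, non-executable stacks, etc.).
source = "result = sum(range(10))"     # just bytes in a string
namespace = {}
code = compile(source, "<data>", "exec")  # data -> code object
exec(code, namespace)                     # the data is now running
print(namespace["result"])                # -> 45
```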
 
[...] reminded with Hans-Jörg Rheinberger that the term should actually be 'fact' ('made') not 'data' ('given').
This is very interesting, and a very astute distinction. I recently wrote a bit about the etymology of 'fact' -- from faciō, meaning 'I do, construct, compose' -- whose supine form, factum, 'made', became a noun through sentences such as "Latrocinium modo factum est" ("A larceny just happened", from Wiktionary): the larceny had been made fact. I imagine this repeats a lot of what Hans-Jörg might have written about; datum is much the same, I think, from dare, 'to give', becoming 'given' in the supine. It is a fascinating separation, thank you for mentioning it -- I am sure I will have a lot of use for it!

(Francis Hunger): What has not been seen as worthy to discuss is the long history of office and production automation [...] because it often relates not to the academic side. It relates to deeply embedded daily practices.
The applications you listed -- Enterprise, Customer and Supply Chain Management -- are often handled by some form of domain-specific language (DSL), probably via some factory pattern in Java or C#; even pure OO has almost the same properties, if a bit less flexible, and functional programming is essentially 100% DSL if you look at it long enough. This implies that the information models are usually embedded, to be operated on algorithmically, so the distinction is not very clear to me: unlike the academic side of algorithms, where data is data, in these practices both code and data usually carry added semantics, specifically to make the models more apparent.
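As a toy illustration of what I mean by an embedded information model -- all names invented here, not taken from any real ERP/CRM system -- a chainable query object already behaves like a small internal DSL:

```python
# A toy internal DSL: the "information model" (what a customer record
# is, which fields are queryable) lives inside the code operating on it.
class Query:
    def __init__(self, rows):
        self.rows = rows

    def where(self, **conditions):
        # keep only rows matching every keyword condition
        self.rows = [r for r in self.rows
                     if all(r.get(k) == v for k, v in conditions.items())]
        return self  # returning self gives the chainable, DSL-like feel

    def select(self, *fields):
        return [{f: r[f] for f in fields} for r in self.rows]

customers = [
    {"name": "Acme", "country": "SE", "tier": "gold"},
    {"name": "Globex", "country": "DE", "tier": "silver"},
]
result = Query(customers).where(country="SE").select("name")
print(result)  # [{'name': 'Acme'}]
```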

(Hanns Holger Rutz): [...] not to talk of "the algorithm", indeed I have looked a while back at the principles of "its" reification ...
(Francis Hunger): it is worth looking into data and the information model much more, since obviously with the rise of Pattern Recognition (aka AI) and databases these are spaces that can be contested and subverted to a larger extent than I have seen until today.
But mashing the two perspectives together is interesting -- would something like a DSL, which essentially serves to type an information model, constitute a reification of data? If we consider the material in this domain to be what is actionable, as an abstraction of the executable, I'd say that statistical models might fit this description. Bayesian statistics, Markov chains and regression classifiers do parameterize data to be actionable, and while perhaps adjusting for bias, they usually seem to do so in the most reductive form possible (not a statistician -- grain of salt).
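A minimal sketch of that reduction, for a two-state Markov chain in plain Python (illustrative, not a serious estimator): the observations collapse into transition counts, the counts into probabilities -- the data is parameterized into something actionable, and everything the counts ignore is gone:

```python
# Reduce a symbol sequence to a two-state Markov model: only pairwise
# transitions survive; all longer-range structure in the data is lost.
from collections import Counter

observations = "AABABBBAABAA"
transitions = Counter(zip(observations, observations[1:]))

def p(a, b):
    """Estimated probability of moving from state a to state b."""
    total = sum(v for (x, _), v in transitions.items() if x == a)
    return transitions[(a, b)] / total

print(round(p("A", "B"), 2))  # -> 0.5
```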

But more interesting is deep learning, because its output is not data but computational graphs -- code. Data -> Code, proper. TensorFlow, for instance, carries a warning in its documentation to be careful with untrusted sources, as saved models are executables with the same permissions as the TensorFlow process (some of the vulnerability, like network access, is I think due to the framework's sugar, though).
https://www.tensorflow.org/tutorials/keras/save_and_load
https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md
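TensorFlow's SavedModel format itself is not pickle, but Python's pickle -- used by many ML stacks for checkpoints -- shows the same data-becomes-code mechanism in a few lines (a harmless demonstration; the callable could be anything):

```python
# Why "saved model = executable": unpickling runs code chosen by
# whoever produced the bytes. Here the payload merely calls os.getcwd,
# but any callable could stand in its place.
import os
import pickle

class Payload:
    def __reduce__(self):          # consulted when pickling
        return (os.getcwd, ())     # "reconstruct me by calling this"

blob = pickle.dumps(Payload())     # looks like inert data...
result = pickle.loads(blob)        # ...but loading it calls os.getcwd()
print(isinstance(result, str))     # -> True: the call really ran
```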

And this is the purpose, of course: to be the ultimate machine -- taking as input information without a model, outputting computation that manipulates an unknown model. The obvious concern here is that we never need to understand the model, barely have to understand the input, and subsequently do not often seem to understand the output well. When used for spectacles, like playing games or impersonating people, it is quite easy to judge the result, because it can only be spectacular or not, convincing or not. But when we start using these processes to generate models of data for us, which I am sure we will, how that output will be regarded is very concerning. Will we trust it because it is spectacular, convincing, with little understanding of what it is? Imagine not only the biases we might be blind to because we hold them ourselves -- racism, sexism, elitism, etc. -- but also all the further biases we are blind to because we have never even considered them before.

A future in the hands of the algorithms' algorithms.

Best regards,
Aad Björkro

On Fri, Oct 25, 2019 at 10:30 AM Francis Hunger <francis.hunger@irmielin.org> wrote:
Dear Hans, dear All,
I prefer to think of "the algorithmic" as the specific medium of computation (what Luciana Parisi would call 'mode of thought', I guess).
I would contest that. I have believed in it myself for a long time. The academic world is obsessed with writing about Turing, about von Neumann, about Cybernetics, and about "'the algorithmic' as the specific medium of computation". And I think it shows in how the field of the computational is discussed today.


What has not been seen as worthy of discussion is the long history of office and production automation, which is less heroic than the above (although not completely disconnected), but in my understanding ultimately much more influential on how computing today is shaped. It is, however, a much overlooked story, because it often relates not to the academic side but to deeply embedded daily practices. JoAnne Yates and Thomas Haigh have written about it, for instance.

The conceptual problem from my perspective is: if the claim is to reflect on "the computational" of today, the question is what kind of computational we are talking about. Are we talking about concepts of the 1930s–1960s which influenced _early_ computing, or about today's everyday practices: the use of infrastructure (banking, water supply, public transport, websites, etc.) and the software applications that shape these practices, from well-known software such as graphic, sound and video design tools to the less known, deeply embedded Enterprise Resource Management, Customer Relationship Management and Supply Chain Management?

These are obviously overlapping yet distinct topical fields, so my intervention is not only about the question of whether to concentrate on algorithms or the algorithmic; it is also about which topical fields currently get discussed -- in festivals, in academic conferences and in the general public. Can they be traced back to 'the algorithmic' as the specific medium of computation? I don't think so, and I suggest talking more about data–information model–algorithm. Maybe it is not the task of a music-centered festival, and I was simply misled by the call for "Algorithms that Matter".

This is to say that the discussion reaches beyond this specific call alone. It also occurred to me, to name just another instance, reading Matteo Pasquinelli's recent and very relevant "Three Thousand Years of Algorithmic Rituals: The Emergence of AI from the Computation of Space". (It would lead too far to discuss it now.) And there are many more.


When we conceived ALMAT back in 2015 (?), we were very much thinking of
the heritage of computer (sound) art, and how things have somehow
shifted in the past years in terms of the role of computation, with
'mattering' of course having the double reference of matter/meaning,
referring thus, among other things, to the physical world, but also
various discourses such as "new materialism". In this way, we would
never assume a clear fissure between algorithm/data, and already the
classical works on computation by Turing, von Neumann etc. were based on
the idea that data="" Last but not least, let's be reminded with
Hans-Jörg Rheinberger that the term should actually be 'fact' ('made')
not 'data' ('given').

In any case, we were mostly interested in coding practices, retroaction
and speculative reason, and so 'data' was never in the focus of our
attention (along with 'big data', 'machine learning', 'models' etc.). We
don't discount it, but simply approach the theme from artistic practice
as writing processes. Logic, classical cybernetics and information
theory are all important for this, but form only part of the truth.

I have a few objections, though. For example you write: "If we for
instance look into how bias enters software, we usually won't find much
in algorithms". This of course depends on the definition of bias. If you
take a step back and look at "computational thinking" as a world view,
then the bias is there from the very beginning in the very conception of
the types of objects we're dealing with, so I think this very much
applies to algorithms as well. With all the discourse on algorithmic
governance, algorithmic ethics and so on, we've become accustomed to
think that's just about creating 'balanced' models and data sets, but
this is too short-sighted. We need to question the entire axioms of
communication/control metaphors.
I agree. I think, however, it is worth looking into data and the information model much more, since obviously with the rise of Pattern Recognition (aka AI) and databases these are spaces that can be contested and subverted to a larger extent than I have seen until today.


This discussion is very important. Would you mind if I add it to the
symposium RC page?
Of course. If it's on nettime, it's public anyway.
All the best,
Francis


Best,

.h.h.


On 10/10/2019 23:20, Francis Hunger wrote:
Hi Hanns and everybody,
Rather than understanding algorithms as existing and transparent tools,
the ALMAT Symposium is interested in their genealogical, processual
aspects and their transformative potential. We seek critical approaches
that avoid both mystification and commodification, that aim at opening
the black box of "wonder" that is often presented to the public when
utilising algorithms.
That's very much needed. And I think there is a conceptual problem,
which this conference shares with many others that talk about "the
algorithm".

I agree that the specialized field of generative art concentrates on
algorithms (those that generate the visual or auditive experience), and
that algorithms matter on a larger scale in optimization (like b-tree
sorting, or the fast gradient step method in pattern recognition).

However, from the perspective of "gray media" (Fuller/Goffey) and
"logistical media" (Rossiter) on the one hand, and "habitual media"
(Wendy Hui Kyong Chun) on the other, I think "algorithm" is the wrong
terminology. Approaching it from the perspective of the database, and
referring to actual practices of application programming, I would argue
that algorithms are a minor issue.

Of much more importance is the information model. The information model
is usually the decision about which information, and subsequently which
data, should be included in the processable reality of computing, and
what to exclude. In short: data is what gets included according to the
information model. Everything else is non-data, non-existent (under the
closed-world assumption) to the computer.
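In miniature, with a Python dataclass standing in for an information model (an illustrative sketch, not from the original mail): whatever the model does not name simply does not exist for the program.

```python
# Closed-world assumption in miniature: the information model (here a
# dataclass) fixes what can be data at all. A person's commute, mood,
# or anything else unnamed is non-data for this program.
from dataclasses import dataclass, fields

@dataclass
class Person:          # the information model
    name: str
    birth_year: int

p = Person("Ada", 1815)
print([f.name for f in fields(Person)])  # -> ['name', 'birth_year']
```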

So if you aim to look into the genealogy of algorithms, you may look
into mathematics and maybe operations research. You will however miss
out on looking at the genealogy of _data_ and the material qualities of
the _information model_.

If we look, for instance, into how bias enters software, we usually
won't find much in the algorithms. B-tree sorting or the training of a
neural network is always tied to weights, and actually needs and creates
bias. Since a computer cannot understand meaning, meaning needs to be
ascribed (through classification), which is done by the mentioned
algorithms moving numerical weights towards a certain result that is
meaningful to humans.
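A threshold classifier in a few lines makes the point (weights hand-picked for illustration, not trained): it is nothing but weights plus a bias term, so 'bias' in the technical sense is constitutive, not a defect.

```python
# A minimal threshold classifier: weights and a bias term are all there
# is. Moving the weights is exactly how meaning gets ascribed to inputs.
def classify(features, weights, bias):
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return 1 if score > 0 else 0

# score = 1.0*0.8 + 0.5*(-0.2) - 0.5 = 0.2 > 0
print(classify([1.0, 0.5], [0.8, -0.2], bias=-0.5))  # -> 1
```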

Much more relevant for the question of bias is, how the _information
model_ is organized, because it inscribes the reality of the computable.
Much more relevant is the question of how _data_ is collected, curated
and used, as we can see in the current projects of Adam Harvey
(https://megapixels.cc/) or !Mediengruppe Bitnik
(https://werkleitz.de/en/ostl-hine-ecsion-postal-machine-decision-part-1),
or the Data Workers Union (https://dataworkers.org/).

I get that 'algorithm' is often used as a common notion, in a similarly
blurry way as 'digital'. However, a stronger concern for the information
model and for data would open up the avenue for a stronger political
stance, since it looks into who decides about inclusions and exclusions,
and how these decisions are shaped. I'm talking about identifying
addressable actors who can be held responsible.

So let's look further into the trinity: information
model–––data–––algorithm (and the infrastructure in and around it).

best

Francis



    
-- 

http://www.irmielin.org
http://databasecultures.irmielin.org
#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nettime@kein.org
#  @nettime_bot tweets mail w/ sender unless #ANON is in Subject: