Monday, April 3, 2017

Understanding "Talking Lions" and Other "Black Boxes"


If a lion could speak, we could not understand him.
-- Wittgenstein [1]


Artificial intelligence is everywhere. But before scientists trust it, they first need to understand how machines learn.
-- D. Castelvecchi, Can We Open the Black Box of AI? [2]


"With artificial intelligence, we are summoning the demon."
-- Elon Musk (2014), reported in CNN Tech, Oct. 28.


Faust with Homunculus

Black Boxes Abound. Less so our understandings of them. The "black boxes" are not only lions or AI devices, but also humans, and more besides. If we generalize, keeping our perceptual limitations in mind, any source of emissions that appears to us to be more than random, whether from an object or even from an "empty" focus of attention, is, initially, a black box.
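To make "more than random" a little more concrete, here is a toy sketch in Python (my own illustration; the phrase, names, and thresholds are assumptions made for the example, not anything from the sources cited). If a source's emissions can be discretized into symbols, compressibility is a crude stand-in for patternedness: a stream that compresses far better than noise over the same alphabet is a cue, though only a cue, that the source is more than random.

    import random
    import zlib

    def compressed_ratio(data):
        # Fraction of original size remaining after zlib compression;
        # a low ratio suggests the stream is patterned, i.e. "more than random."
        return len(zlib.compress(data)) / len(data)

    phrase = b"aym gowing howm " * 64   # the lion's apparent sentence (see below), repeated
    alphabet = sorted(set(phrase))      # the distinct byte values the phrase uses
    random.seed(0)                      # fixed seed so the comparison is repeatable
    noise = bytes(random.choice(alphabet) for _ in range(len(phrase)))

    print(compressed_ratio(phrase))     # small: heavy repetition, strong pattern
    print(compressed_ratio(noise))      # larger: near-incompressible for this alphabet

Of course, a low ratio no more proves speech than a backwards recording that happens to sound English proves fluency; it only tells us where a black box might be worth opening.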

Many commentators have remarked that Wittgenstein makes an incautious jump from speaking to understanding in the quote cited above. I will generalize their concerns: how do we know that what appears to us to be a non-random emission of, say, sound is speech? Or, if the transmission is not audible to humans, that it is language? Animals can learn to mimic human emissions. Even non-English recordings, played backwards, can sound like English speech.

To many a monolingual English speaker, the following phrases sound like pieces of English nursery rhymes colored with a non-English accent:
1. French -- Un petit d'un petit s'étonne aux Halles. (Humpty Dumpty sat on a wall.)
2. German -- Oh wer, oh wer ist Mai Lido doch Gong? (Oh where, oh where is my little dog gone?)
3. Spanish (Caribbean) -- Grima! Sí, comí. Te excusé que rifa. (Christmas is coming, the goose is getting fat.) [3]

Black Boxes are "Understandings." How does Wittgenstein even judge what the lion is doing? What assumptions is he making? Even if the lion's emissions sound like an English sentence, what is relevant in the context? What is the lion doing? Suppose, for example, our friend Harry is standing before a lion's cage in which a cave has been constructed as a den. The lion, staring at Harry, emits the sound sequence /2aym+ gowing+3 hówm1/. [4]

It sounds like the lion just said, "I'm going home." But did it "say" that? Suppose the lion then lies down and rolls over on its back. It starts to snore. Are we still inclined to think that the lion talked? (If so, did he talk to Harry? Did the lion inform Harry it was "going home"? And, thus, was the lion telling a lie?) Can Harry believe his eyes and ears? Has he jumped to conclusions? (See Artificial Intelligence Weirdness. Need categorizing relate to visual cues?)

The basic problem is how we distinguish illusions, whether visual, auditory, or other, from realities. Even AIs have this problem. (See Autonomous Car Collides with Bus: an illusion of abstractions?)

And What Are Understandings? Understandings, when articulated in language, are narratives (or, if highly structured, programs) of collection or connection. We may judge them incorrect, false, or incomplete, but they are still understandings -- we might say, "misunderstandings."

People may not be at all articulate about what they understand, so we may have to observe how they proceed from their present conditions toward the outcomes we judge them to be pursuing. If we observe persistent failure to achieve a goal, we might reasonably judge that their understanding of how to achieve that goal is deficient.

A large number of likely misunderstandings persist in any population because they need not, or often cannot, be put to any test whose outcomes enjoy broad consensus as to their pertinence. (So what if kids believe in the Tooth Fairy!) These questionable understandings are often characterized as (empirically) "non-disconfirmable" beliefs. Examples are:
a. The mind is coextensive with the body; or
b. The mind is not coextensive with the body.
c. The universe is a hologram; or
d. The universe is not a hologram.

It would appear that any rationale, or chain of rationales, that contains misunderstandings would thereby be severely weakened. (Many pundits who presume to speak for Science share such misunderstandings with those who presume to speak for Religion. See Pseudo-Science: the reasonable constraints of Empiricism.)

The Fractal(?) Generation of Rationales. Understanding, in general, is the ability to produce chains of behavior, or their narratives, which link a confronted situation of interest to a goal to be achieved. But understandings alone are often narrow and may give no indication of the interests or abilities of the persons who understand and may be expected to act. Thus, understandings may be just part of what we're looking for.

Rationales can be elaborated from understandings. Rationales often bring up systemic concerns about, say, indicator validity (Cue), actor interests (Concern), and actor abilities (Control), which underlie attempts at interventions based on narrower technical understandings.

Understandings can be chained together to produce rationales for action, and rationales themselves chained to produce broader understandings. Thus we may go from a merely speculative understanding of how to cross a river, to a rationale for expanding commerce across that river, to an understanding of how political influence can be brought about, reiteratively, by using market rather than, say, armed forces. Understandings developed this way can be connected together, in chains or trees, etc., to articulate, say, foreign economic development policies. (See The Fractalization of Social Enterprise)
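As a toy model of this chaining (my own sketch in Python; the class names, the river example, and the narrate helper are illustrative assumptions, not anything from the sources cited), we might treat understandings as leaves and rationales as trees built over them:

    from dataclasses import dataclass, field

    @dataclass
    class Understanding:
        # A narrative linking a confronted situation to a goal to be achieved.
        situation: str
        goal: str

    @dataclass
    class Rationale:
        # A chain or tree of understandings (and sub-rationales) serving a broader aim.
        aim: str
        parts: list = field(default_factory=list)

    def narrate(node, depth=0):
        # Walk the tree, printing each level: rationales elaborated from understandings.
        pad = "  " * depth
        if isinstance(node, Understanding):
            print(pad + "understanding: " + node.situation + " -> " + node.goal)
        else:
            print(pad + "rationale: " + node.aim)
            for part in node.parts:
                narrate(part, depth + 1)

    # From crossing a river, to commerce, to policy -- the chain from the text above.
    crossing = Understanding("standing at a river", "reach the far bank")
    commerce = Rationale("expand commerce across the river", [crossing])
    policy = Rationale("exert influence through markets rather than armed force", [commerce])
    narrate(policy)

The point is only structural: the same pieces can sit inside ever-larger assemblies, which is what tempts one to call the generation of rationales "fractal."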


Why Can We Understand Human Black Boxes but Find It Harder to Understand AI Black Boxes?
It is because many people working in AI feel restricted to technical issues of understanding the hows of economically pertinent AI functioning. Despite the persistent anthropomorphizing of AI output -- and despite the suspicions that the ultimate goal of AI research is to create some kind of homunculus -- issues of rationale, especially in the contexts of human development and learning, are discounted as off target. [5]

Autonomous AI stimulates the same kind of misgivings that one might feel about extraterrestrials. We are not sure we can predict, much less control, what they might do with us. It is a non sequitur to believe that high computational ability equates to altruism. (See METI. Here We Are! Come Eat Us! Our Children Are Especially Tasty!)

We have built-in, so to speak, Cue-identifying abilities inherited through evolution -- physical, mental, and social -- that are sensitive to the norms of the environments of our development. We are Concerned to share a world with motile, sometimes dangerous, beings that we can hear, feel, smell, and taste as well as see. And we know from personal experience what fear, hunger, danger, hate, and social attraction are. [6]

We measure the autonomy we are willing to extend to our ancient non-human friends and enemies because we can guess well what drives them, and how to accommodate them to our society. And, not least important, we can, with more or less success, Control and defend ourselves against them.

To pursue the issues of understanding and rationale, see Intervention. Helping, interfering or just being useless?

Cordially, EGR

(P.S. Check out this interesting interview with Gary Marcus called Making AI More Human.)

NOTES
[1] L. Wittgenstein (1958). Philosophical Investigations, ed. G.E.M. Anscombe and R. Rhees, tr. G.E.M. Anscombe, 2nd edition. Oxford: Blackwell, p. 223. See comments by Simon van Rysewyk at Wittgenstein Light.

[2] D. Castelvecchi, "Can We Open the Black Box of AI?" Nature, October 5, 2016.

[3] Sources for pseudo-English concoctions: French -- Mots d'Heures: Gousses, Rames, The d'Antin Manuscript. Penguin, 1980; German -- J. Hulme, Mörder, Guss Reims: The Gustave Leberwurst Manuscript. Clarkson N. Potter, 1981; Spanish -- my confabulation, EGR.

[4] IPA; digits indicate tone levels. Cf. H. A. Gleason, An Introduction to Descriptive Linguistics (New York: Holt, Rinehart and Winston, 1955).

[5] See E. G. Rozycki, "Behavior," in Measurability and Educational Concerns.

[6] See D. Gross, Why Artificial Intelligence Needs Some Emotional Intelligence.