Status: Putting out ideas
Last edit: February 2, 2018
I have had these matters in my mind for quite a while, but only when sitting down to write this essay did I search for the opinions and findings of others on the matter, starting with the Wikipedia article on the topic ––– evolutionary aesthetics. Currently this essay is based on my own musings on the subject, previous readings in evolutionary psychology, and miscellaneous papers I have gleaned. Retrospectively it occurred to me that probably a post or several by Eliezer Yudkowsky about the complexity of values had led me onto this line of thought.
The basic premise of this essay is that human psychology ––– meaning that which underlies emergent human behaviour, the activity of the hormone-infused, body-embedded brain ––– was evolutionarily adapted over the millennia in the same way that the mechanistic traits of the body were. My assumption here is that aesthetic experience –– pleasure at perceptual content –– evolved to a large part under selective pressure, and that there is perceptual content that is inherently rewarding. Art pieces and performances would then be a give and take between form (an inherently pleasant “perceivable”) and content (a message conveyed thereby),1 so that a well-rhymed and metered poem would be “artistically superior”, all else being equal, both to a text that is informative but lacks any prosodic flair, and to a “poem” that communicates nothing but contains meter, rhyme and alliteration.
My question here, then, is what gave rise to these “inherent pleasures”. My assumption is that they are multiple and independent of each other; that each was the product of an evolutionary adaptation to a different challenge; and that the evolution of attraction (a drive towards increased proximity) to certain stimuli conferred, ultimately, a survival advantage on individuals.
“Attraction”, as well as “repulsion” from “anti-aesthetic” stimuli, is a shorthand for a reactive imperative. In general the former would be an imperative to prolong and/or amplify the stimulus, while the latter, contrarily, would be an imperative to terminate or moderate it. Specifically, attraction can be realized as an imperative to direct attention at the stimulus, or as comfort in letting attention linger on it; an imperative to come closer, to take, to eat, to let the stimulus in. Repulsion can be realized as an imperative to move away, to close oneself up perceptually (eyes, ears, body); to spit, kick, hit, stop.
While I am fairly certain that there are “multimodal”/“multisensory” aesthetic phenomena, I will make an arbitrary and simplifying separation and cover mostly “effects” that occur within the perceptual scope of a single “sense”, at least in the beginning. Also, while humans vary both in how they perceive and in how they react to these perceptions, I will cover and discuss what seems to me most prevalent, according to my own experience of myself and other people and to published papers that looked into such things, though I’ll mention prominent “exceptions” and common variances.
Taste
Much of what we consider to be part of the taste of food is informed by the nose, that is, by olfactory perception; here I’ll concentrate on that which is detected by the tongue.2
A primary concern of mammals in general and humans in particular is ingesting adequate nutrition to survive, and avoiding the ingestion of harmful substances. Carbohydrates, salts and proteins are necessary, and for their detection humans evolved[^TasteEvolution] the senses of sweetness and saltiness; some studies suggest that humans can also detect fats in food. The sense of umami detects the presence of glutamate, which is used in the synthesis of proteins. It is non-essential to humans, that is, our bodies can synthesize it, but perhaps it is metabolically “cheaper” to ingest than to synthesize, making umami a “good” taste. It’s also possible that the presence of glutamate in foodstuffs has correlated over evolutionary time with the presence of other necessary substances, so that relishing the former incentivised the ingestion of the latter. In other words, it might be that glutamate is tasty not because it is necessary, but because it is often accompanied by other nutrients that are.
The sense of sourness detects acids in food, which might indicate –– at high levels –– the spoilage of the food. The sense of bitterness detects various compounds, many of which are toxic, and to which aversion is therefore an adequate response.
But some degree of sourness and bitterness in food, generally or just in certain dishes, is enjoyed by people, and many foods that are at first repellent even become attractive after people get used to eating them. These foods might repel through an aversive taste, such as bitterness or sourness, through an aversive appearance (being or seeming “rotten”, for example), or even “conceptually”, such as the Sardinian casu marzu, a cheese that has been partially digested by fly maggots, and which is eaten with the living maggots inside3. We call such foods an “acquired taste”, and I think there’s a very simple process of learning underlying them.
Under the active or passive encouragement of the surrounding social environment, we might try foods or drinks that are distasteful to us, whether too bitter, sour, spicy-hot or otherwise unpleasant. However, as these foods are not harmful the way actual “bad foods” are, we subsequently learn that the initial “signal of badness” was misleading, and as these foods are still nutritious (or responsible for a sought-after psychoactive effect, such as alcohol), the “acquiring” person “learns to ignore” the signal and enjoy the food “for what it is”. This phenomenon is the inverse of what happens when people eat a “good food” and subsequently ––– whether because of it or not ––– become sick to the stomach. Humans (and other animals) will thereafter avoid that food,4 at least for a while.
Another idea5 is that a food is learned to be enjoyed for the sake of “status”. This is not an alternative idea, and it can to an extent be subsumed: while above a food is learned to be enjoyed despite an initial aversive component in its taste, for the sake of its nutrients (or for status), in the latter case the aversive component itself is learned to be enjoyed on its own, under the encouragement of status acquisition ––– indeed, spiciness is often something added to other dishes. I find the idea interesting but not particularly persuasive. Chilli, as well as other spices, has antibacterial properties.6 It is plausible, therefore, that the body detects this positive effect and correctly associates it with the spices.
While a bland food can be fixed with saltiness, which people intrinsically enjoy, some people would fix it with spicy-hotness, which is intrinsically not enjoyable ––– a pain ––– but which, after a taste acquisition, becomes a positive component. This happens even though people seldom eat black pepper or chilli peppers on their own. I think this has to do with the fact that the brain associates all components of a dish as belonging to it, whether a component arises from the “main part” or only from the dressing and spicing. I think that’s what’s behind the fact that sometimes mere mayonnaise tastes to me like artichoke, even years after I last ate one. As a child I ate artichokes not infrequently, and always with mayonnaise, which might have overwhelmed the subtle taste of the artichoke itself.
The phenomenon of “acquired foods” is, however, a special case of foods that are enjoyed by some while not by others, and which repel due to a strong taste component (bitter, salty &c.) or some “psychological” reason. There are other foods that some people might dislike which lack such extreme tastes, and I suppose this has to do with the fact that the enjoyment of foods depends on more than the sum of gustatory (and olfactory) signals. Presumably the sense of taste –– together with the sense of smell –– evolved to evaluate the goodness of food, and would elicit joy from foods that are nutritious and worthy of ingestion, and displeasure otherwise. Flavour, as opposed to taste (to make a distinction used in the literature), is not something humans have a strong intrinsic preference for. That is, as with any acquired taste, people learn to distinguish and then prefer some foods over others. The enjoyment of a certain flavour by a person at one point in time depends on that person’s history of gustatory experiences, and on her identification and conceptualization of the food. The recombination of flavour components (gustatory and olfactory) of familiar and loved foods in new ways would not necessarily be appreciated –– at least not immediately –– not because somehow the components “do not fit” each other (salty banana, sweet vegetable soup), but because for the mind this recombination is experienced as a new, unfamiliar food, which might therefore be bad, and so is treated with suspicion. Research is solid on the fact that flavour needs its context to be correctly recognized,7 so that a “cherry flavour” tasted without “cherry” as context, or, worse, tasted from a food that is not at all cherry-like, would often go unrecognized by tasters. On a personal, anecdotal level: I once received from a friend a package of cookies he had baked, sent to me via international post.
One kind of cookie had cardamom in it, which I wasn’t aware of, and I hadn’t eaten cardamom in baked goods before. I found the taste very suspicious, and did not enjoy them so much ––– more like “politely tolerated” them. I suspected they had gone bad; I can’t remember anymore whether some of the cookies in the package were visibly (and actually) moldy ––– brought about by the long postal travel ––– or whether it is just my suspicion that they were that is now molding my memory. However, later I was informed by him of the ingredient in these cookies, which I immediately associated with the strange flavoury element, for I knew what cardamom tasted like “in general”, though it was not something I had frequently eaten, and thenceforward I started to truly enjoy these cookies ––– once identified, the cardamom flavour was very much to my taste in cookies.
Therefore, I think, the culinary art, perhaps like any other art, is not “absolute in its aesthetics”, but rather very much dependent on what the audience is familiar with. The “language of cooking”, however, does not express a thought or an emotion, but the goodness of the food. They say that music is a “universal language”, hinting, of course, that it is not expressed in any particular spoken language. However, we all know that musical taste is something that is cultivated: an ear is developed for it, and the enjoyment of a certain piece often depends on familiarity with other musical pieces or with some musical movement. Naively one would think that this, at least, would not be the case with something as basal as eating, that a well-made dish would be enjoyable to everybody; but this art, too, is not quite “universal” and depends on the expectations of the diners.
I find it a curious question why it is “taste” that serves as the metaphor for a person’s cultural preferences: one’s taste in books, taste in films and so on. I was unable at first to find the dating of the beginning of this usage of the word “taste”, nor of when the words “consume” and “consumer” came to denote the acquisition and usage of manmade things (“consume horror movies”), but I suspected that they were part of the same extended metaphor, and that these usages rose in parallel. Further, I thought that the usage of “taste” preceded that of “consume”, and that the latter was a derivative of the idea of “taste acquisition” (“taste” applied to non-foodstuffs, then the word “consume” similarly applied).
Later it occurred to me to use Google Books to browse old books for instances of these words. It seems I was partially right and partially wrong. Based on my inaccurate methodology, it seems that this usage of the word “taste” arose, in English, around 1730. The usage of “consumer” in an economic context preceded this by at least several decades, but the parallel usage of the verb “consume” seems to have risen around the same time as “taste”. Up until the end of the 17th century, the verb appears mostly in religious texts, mostly to denote things being consumed by fire, or the corporeal body being consumed. The “new usage” of the verb appeared mostly in the context of things that can be eaten and drunk, such as sugar, and in a more abstracted, statistical/economic sense, such as the ability to consume a land’s increased produce, or increased consumption of wine in association with tariff changes. The transition to a completely abstracted-from-eating/drinking economic sense happened after the new usage of the word “taste”, and seems to have happened gradually and along a rather natural course: a book from 1756 has the verb appearing in relation to consuming wheat (a good that is turned into food but is not eaten directly by humans), one from 1795 has consuming “taxed articles”, and ones from 1799 and 1822 have consuming wool and leather, respectively. It should be remembered, however, that while in the 1730s the word “taste” was already applied to the appreciation of, or preference for, novels, the verb “consume” was applied to more “material” goods. Unlike nowadays, books were the only media that could be bought at a shop and carried away, passed to a third party and so on, while music and plays were performed, that is, could be appreciated only during the time that their performers were actively making them.
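My rough dating method could be made a little more systematic. The snippet below is a minimal sketch of the idea rather than my actual procedure: given per-year relative frequencies for a phrase (the numbers here are invented placeholders, not real Google Books data), it reports the first year from which the frequency stays above a chosen threshold for a sustained run ––– a crude guard against one-off hits from misdated books.

```python
# Hypothetical per-year relative frequencies for a phrase,
# keyed by year. Placeholder values, NOT real Ngram counts.
freqs = {1700: 0.0, 1710: 0.0, 1720: 0.1, 1730: 0.9, 1740: 1.2, 1750: 1.5}

def first_sustained_year(series, threshold, run=2):
    """Return the first year from which `run` consecutive samples
    stay at or above `threshold`; None if no such run exists."""
    years = sorted(series)
    streak = 0
    for i, year in enumerate(years):
        streak = streak + 1 if series[year] >= threshold else 0
        if streak == run:
            return years[i - run + 1]  # start of the sustained run
    return None

print(first_sustained_year(freqs, 0.5))  # with these placeholder numbers: 1730
```

The single stray hit at 1720 is ignored here by the `run` requirement, which mirrors the informal judgment I applied when skimming search results: one early occurrence does not date a usage, a sustained presence does.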
Nonetheless, it is interesting that the word “production” was already used in 1688 in the context of theatre, as if a theatre is “productive” in the same way that a farm is, even though the latter yields something material that is literally consumed, disappearing upon its usage as food, while the former “yields” something immaterial, an enactment of a play, that is only consumed in a metaphorical sense.
The mystery is why a word that has to do with tasting is applied to things that are perceived with the senses of vision and audition, and appreciated by the mind. Granted, there are related expressions that use the “proper” perceptive organs: “to have an eye for” and “to have an ear for”. However, these are not alternatives to “to have a taste for”, nor to the idea of having a taste. “To have an eye for” refers to a person’s ability to recognize what is pleasing to the eye and to arrange visuals accordingly; one does not have a “good eye for film”, even though films are “watched”. Similarly, “having an ear for” can be used for music, when one is good at breaking a complex auditory arrangement down to its components, regardless of how enjoyable one finds it, and for languages, when one is good at learning them. “Having a taste for”, however, indicates enjoyment of a certain “artifact”, such as a taste for a certain kind of movie.
I think this notion is related to the idea of “acquired taste”. A gastronomic taste is acquired through habituation, through work, as one needs to suffer through palatine unpleasantness. Similarly, today, as in 1730, there is a notion that taste can be good or bad, and that good taste is associated with cultivation, with education. Something is “distasteful” if it’s crass, that is, unpolished, raw as a base human is raw. A person’s taste is something that person cultivates, among other means through education. A work of “good taste” is usually subtle, such that one needs to train a certain acuity of differentiation to properly understand, and therefore appreciate, the work. I was disappointed that in Google’s Ngram Viewer the combination “acquired taste” starts appearing only during the 20th century, but “cultivate” did increase monotonically from the 1670s to a peak a century later ––– this probably has to do with the so-called “second agricultural revolution”, and most of the hits for the verb had to do with the cultivation of land and of trees, but also of alliances, friendship, the arts, good understanding, and, as early as 1688, in J.C. Aggarwal’s Basic school organisation, the cultivation of “knowledge of and appreciation of the beautiful in nature and in art”, on page 117. My idea that “cultivation” might be a precursor of “acquired taste” ––– equivalent in meaning and metaphoric conceptualization although differing in the particular words employed ––– is supported by the usage of the verb in the same book on page 89 (the emphasis is mine):
Careful presentation of well-chosen dramatic productions help to teach the students to spend their leisure time more usefully, cultivate dramatic tastes and intelligent discrimination.
This is a whimsical speculation, but I think that perhaps this notion of cultivated taste came about due to the state of trade at that time. Around the mid-17th century coffee, tea and chocolate were introduced into England. A sudden and steep rise in their appearance in literature comes only a hundred years later, perhaps indicative of their established prevalence by then. What unites these three goods is that they are all (1) bitter and (2) emerged at a similar time as imports to England. Because they are bitter, a taste for them must be acquired. Because they “suddenly” appeared in markets, people initially acquired the taste for them as adults, being introduced to them not by their parents or nannies as preteens but by their local East Indies merchant, placing this acquisition of taste in a much more vivid memory, and the process in conscious cognition. Lastly, all three were initially luxury goods, enjoyed mostly by the rich. That meant, of course, that only people who could consistently acquire them could acquire the taste for them. And so, altogether, in the end, we have a state in which people of high class enjoy(!) the taste of luxurious drinks such as coffee and chocolate, while if you give a cobbler a cuppa in a blind test he’d probably spit out his first mouthful, thinking you had offered him mud dissolved in water. I suppose there might have been a language to describe the superior discernment of the higher classes before that (or had there been? Perhaps fine discernment became a virtue only during the Enlightenment, a contemporary of the novel coffee and chocolate imports), but the taste metaphor was apt and lent itself to this kind of expression.
There’s another phenomenon that at least uses metaphors of taste, and which might or might not allude to a certain “evolutionary mental heuristic”: given that evolution makes incremental improvements, it might often result in, say, a behavioural configuration that is effective even if not “optimal” from some design perspective.
There’s cuteness. Creatures who are regarded as cute typically have features and/or behaviour that are infantile or display powerlessness, or both. One can imagine a baby with a sour face exhibiting “aggressive behaviour”, or a tiger cub that playfully bites, and both would seem cute despite the ostensibly aggressive behaviour, because they seem to lack either the power to harm or the full intent to do so. Only once they cause significant pain or real destruction would there be an immediate transformation in the way they are regarded.
Cuteness elicits several reactions. Beyond the elicitation of affection, cuteness exerts an actual imperative (an imperative to act), which, I think, is indeed what differentiates it from beauty and other affection elicitors. First, it evokes an urge to touch, grasp, embrace. Often this urge is frustrated (for example, when viewing cute things on YouTube…) and instead of being consummated (by embracing the cute thing) it is satisfied through an ersatz: an inward motion of the hands and a squeezing of the arm muscles which simulate the pressure of an embrace.
Second, cuteness elicits from people expressions of wanting to eat (“You’re so cute I want to eat you”), and, supposedly, a genuine urge to do so. Many years ago, as a schoolboy, I was at home (either it was the holidays or I was sick). I took a lettuce head out of the fridge to eat, but found a chubby green caterpillar on it. I adopted it as a pet, “playing” with it, watching it crawl on the table or on my leg. I remember a moment of feeling how cute it was, “so cute I wanted to eat it”. It wasn’t some metaphor: I genuinely considered for a few moments whether I should eat it or not. It looked nourishing, plump, juicy…
I have various thoughts as to where this tendency comes from; this whole essay is quite speculative, but here I feel they may really be missing their mark. Further, I have a sense that there are several confounds here, that is, that evolutionarily there were several things that pressured towards the emergence of an instinct to eat/bite/nibble conspecifics and other creatures. Nonetheless, I shall list them here (and return to “taste” at the end).
First, I thought that, being the carnivorous omnivores that humans have been, they would evolve an instinct leading them to eat the right things. Perhaps animals that were not so dangerous-looking (i.e., cute) elicited an appropriate urge to eat them, while more dangerous animals did not elicit such a response and were appropriately avoided. Maybe even the plants that seem cute were the ones that are good to eat ––– round, plump fruits and vegetables (searching “kawaii” in a search engine yields quite a few images of food articles with faces). But this seems very certainly wrong to me: first, even if cute babies are a superstimulus of a cute animal, and even if they elicit an urge that is interpreted as an urge “to eat them” (for I think the verbal expression is uttered in lieu of following an actual imperative), they certainly don’t evoke a real response of devouring. After all, if that were the case we would hear about mothers and aunts who eat babies more often. Though cannibalism, and even child-eating, does occur, it occurs mostly in cases of extreme famine8 and is therefore abnormal. On the other hand, people generally curtail their urges, and many are well-fed enough that they will not expand their diet to include their cute pets, let alone their children.9 On the first hand, again, it is precisely the cute animals that humans don’t eat; the domestication-led enhancement of cuteness has been the result of a selection for companionship, not for livestock. While dogs and cats are eaten in some parts of the world, it seems that their adoption as pet animals (the adoption of the habit of adopting) is correlated with a rising opposition to their consumption as food ––– though, again, extreme cases of famine notwithstanding. Cuteness seems to elicit protection, not hunger. So it seems this line of thought might be off, or just more complicated.
Looking at other animals might help in trying to disentangle this. Dogs ––– as well as the wolves “before” them ––– lick each other; effectively they are grooming each other’s fur, doing for each other, as social animals, what, say, the more solitary cat does for itself, while at the same time forming bonds. Apes such as chimpanzees and baboons also groom each other using the mouth, albeit differently: they pick at each other’s skin and look for latching insects, which they eat. Is the human behaviour analogous? Perhaps there are no more ticks adhering to our skin, nor are we covered in fur that needs tending, but it is possible that an intrinsic inclination survived, not constituting an evolutionary disadvantage.
On the one hand, the grooming an animal gives to the groomed is usually a sign of submission (very much as, among humans, giving oral sex is), while the relationship between the cute thing and its human admirer (the “groomer”) is ostensibly the reverse. On the other hand, children and the young generally are often the recipients of unconditional care, among humans as well as animals, as with the cat mother who licks her kittens. Humans kiss out of affection, and that is reminiscent of the mouth-to-skin gestures that animals make out of affection. Also, although humans embrace in affection too ––– including cats, which more often than not look like victims rather than beneficiaries of love ––– they also fondle and stroke, not only pets but also small children; that is, they make an action directed at the surface of the skin rather than the volume of the body. Is it possible that when an aunt squeezes a child’s cheek she is, unknowingly, removing a phantom tick? Perhaps the skin-oriented pinching and kissing arises from a different evolutionary motivation than that which is behind hugging.
I can’t vouch for any of the above too strongly; in a way they all seem as likely as my initial idea that there’s a simple confound in the mind. Nourishment and keeping offspring safe are two things that have always been important for survival. To nourish oneself one must first bring food closer, and to keep helpless babies safe one had to keep them near, even in one’s arms. If we assume a modular mind (which I do), and given that these two are modules that solve a problem by physical approximation, then it’s possible that a very strong “bring me closer” signal given off by particularly baby-like things (“cute things”) elicits a side-effect response from the other approximation system, the one that takes care of nourishment and which chains two actions sequentially: first bring the food close to you, then bite and eat it.
For the sake of completeness I’ll also mention biting as it is used in intercourse. As with felids, where the mounting male bites the female upon copulation, biting in human sex is a kind of domineering gesture, which in addition is a kind of signal of loss of control, also encapsulated in the gesture of “lip biting”, whereby the biter suggests that they can barely control themselves. I thought I had the memory, or at the very least the vague notion, that apes tend to be rather aggressive in sex, including biting, which made me think that biting might be instinctual in humans, or at least in some; but upon a not very thorough search I didn’t find any references to such behaviour among simians.
There’s a gustatory metaphor applied to humans which might or might not be related to the “so cute I want to eat you” instinct/impression/expression: tastes are applied to describe people. The most common one, and the one most linked to the topic above, is sweetness. A “sweet person” is a person who, first of all, is innocuous, like a baby. The word is then extended to describe people who are kind, particularly when unexpectedly so. That is, one expects something neutral, or even harmful, but receives from the person something unselfish and helpful. In English this usage, or at least the application of “sweet” to people, goes back as far as books on Google Books go, namely to 1607, where Thomas Heywood’s “The Fayre Mayde of the Exchange” has the expression “sweet wench”. The same usage of “sweet” is extant in Hebrew as well as in German, though in the latter language it is used more to describe cute rather than kind people. In Russian, when applied to people, “sweet” is used only to describe that which is very cute, mostly babies. At least without considerable research it would be hard to say whether this usage was borrowed from one language to another or arose spontaneously within more than one of them. There might be a lazy inclination to go with the null hypothesis that it was simply human creativity, of the kind used to create metaphors in poetry, that coined such an articulation, with no roots in some complicated tangle of mental glitches. However, the employment of other words of taste, used differentially in different languages, suggests, at least to me, that something in the human mind is systematically set to produce such language, that it is not the result of an afflatus.
A slight variation on the above is the idea that tastes are associated with certain reactions to the world, such that if something elicits a given reaction, we associate it with the taste that elicits the analogous palatine reaction. Nourishment is important to all life forms, to all humans, and our relation to food is central; enjoying food (or not) is an everyday activity, the need to satisfy our hunger impinges on us daily, and people must and do plan their days around this need. Perhaps, then, our names for different taste-phenomena are extended to describe our matching reactions, and further used to describe all analogous reactions?
It is true that sweetness too can cloy, but generally it is an attractive taste. Therefore, anything that is attractive or pleasing can evoke, by association, the sense of “being sweet”: sweet talk, sweet dreams and, less analogously, a sweet spot.
In English one does not commonly refer to somebody as “bitter” outright, but it is indeed possible to call someone “a bitter person” if they are grumpy and dissatisfied with the world. A person like this is probably not very pleasant company, but I suppose the usage came about the other way, via expressions such as “bitter about”, such that the person who “experiences the bitterness” is not the company of the describee but the describee themselves. This is the case in Hebrew as well, where an “embittered person” is a person who cannot enjoy anything in life. In Mandarin, on the other hand, a bitter person is someone who is cruel, that is, someone who provokes unpleasantness in others (like the bitter taste).
Also in Mandarin, sourness is the effect of something that makes one uncomfortable in a sad way, such as a beggar sleeping on the street in winter, or a war-tragedy movie ––– things that make one’s heart sour.
As used by adults, sticking the tongue out expresses mischief, but I think that is so only because it emulates children; as “genuinely” used by children, this ––– primitive? ––– expression conveys enmity. Spitting out or sticking the tongue out is what children, as well as adults, do when they put something with an offensive taste in their mouth, and it seems this gesture is extended by people to anything they find repulsive ––– tongue-sticking by children, and spitting by adults. This is also the case, I think, with the more mitigated expressions of pouting and of making a “sour face”, expressions against things one finds distasteful.
There are two phases of the disgust reaction, so to speak. The one mentioned above would be the “second”: after a disgusting agent has reached our senses or our bodies, as something we ate or bit into, there’s an effort to expel it via tongue-sticking, spitting or even throwing up. However, if one is merely in the presence of an agent of disgust, for example standing in a room with a very strong foul stench, the opposite measures are taken: the person closes up, sealing their mouth and holding their breath. I think this is the analogy at play in the expression “he is sour about X”: when someone is in a bad mood, indignant about something, and expresses their bad opinion about it, then they are bitter. However, if they are simply moody but refuse to talk about it, they are more likely to be “sour”: they hold their indignant thoughts and words to themselves, as if holding their breath.
In Hebrew “salted” (memulakh) means “cunning, shrewd”, while in Mandarin “salty” (xian) means perverted; I can explain neither. Another adjective for cunning and witty in Hebrew is “spicy” (harif), but I think it likely comes from Russian, where the same word is used for “sharp” (also used to denote “clever”: остроумный, “sharp-minded”, the reverse of a “dull person”) and “spicy”.
There’s an obvious complication to this whole matter, stemming from the very fact that allows this kind of language to arise: since humans do not taste each other, taste-description words, when applied to people, can assume arbitrary significance. Perhaps there was a logic to the usage within the story that first applied “salty” or “bitter” to some person, but then the word propagated out of that context ––– a context which upheld the congruence of the adjective within a larger framework, and without which the adjective is “merely arbitrary”. An easy parallel is colour adjectives. In English “blue” is sad; “green” is inexperienced, while in Spanish it’s naughty; in Hebrew “yellow” means someone who strictly conforms to regulations; and so on. Probably when these expressions were first used the colour was not randomly chosen, but as adjectives describing people the “logic” is not carried with the words themselves and is therefore lost.
If we describe people as looking good or bad (“beautiful”, “ugly”), we comment on the aesthetic impression their appearance makes, comparing them to other people along the defined dimension of outward appearance. The same goes for a description of how a person “sounds”, which refers to their qualities as manifested through speech: either the impression made by a third party’s direct verbal description (-“she did x and y” -“oh, she sounds brave/ kind/ sweet/ nice/ patriotic”) or the impression a person’s own words indirectly make of him/herself (“she (her speech) sounds crazy/ patriotic/ brave/ like a communist”). People do also smell each other, and a description of how a person smells (good/ bad) would be literal. People touch and therefore feel each other, but when describing how another person feels, the “feel” used is mostly the reflexive rather than the tactile sense,10 that is, how the person feels in themselves (“I feel/ she feels well/ sick/ cheerful”). Very particular exceptions aside, people don’t literally taste each other, and therefore comments about other people’s tastes (sweet, bitter, salty) would first of all be taken metaphorically, and then literally/ sexually, like most verbal expressions that don’t make immediate sense and therefore present themselves as innuendos. The only way to ascribe taste-description usage applied to people to mental modules’ “leakage” out of their original function ––– cognitive structures handling food taking non-food as their objects ––– as I tried to do above, would be through a systematic inspection of languages and a search for similar occurrences that were not borrowed from one another.
Smell
Smell helps humans to assess the goodness of food even before they put it in their mouths. Rotten food stinks and would elicit an aversive response, while good, “sweet-smelling” food would be relished.
Another domain where “aesthetics of smell” is relevant is the assessment of others, namely of potential mates. Body odour might serve as a cue to such aspects as developmental stability and hormonal constitution, as suggested by studies that found that women prefer the odours of more face-symmetric men ––– though only non-single women during the peak-fertility period of the menstrual cycle11 ––– as well as of more narcissistic men (based on a questionnaire-based scale)12 (the notion that odour can communicate immunocompatibility, and with it attractiveness, seems to be wrong), and by studies that found that men find the odour of ovulating women more pleasant than that of women in other phases of the menstrual cycle.13 In either case, the appreciation of these cues facilitates reproduction by (1) facilitating conception and (2) selecting a mate with perceived “good genes”.
Touch
There are tactile sensations that are pleasant and unpleasant. As far as textures go, whether sensed by the hands or the body, hard and ragged surfaces are unpleasant, while soft and fluffy ones are pleasant. This is reasonable: the former exert concentrated, localized pressure that might lead to injury, while the latter, even if covering a harder surface beneath, spread pressure over a wider area of the skin, reducing the maximal pressure at any given point and therefore the signal of the mechanical nociceptors that discourage the individual from inflicting damage on oneself.14 It may seem unclear, however, why a soft fabric would be not simply not-unpleasant but inherently pleasant, such that people might be attracted to it, wishing to bring it closer, for example by pressing it to their cheek, or by keeping a security blanket or a teddy bear as children, or as adults. I can hypothetically attribute this to two things. First, human beings are themselves soft, and as far as valuable bond-forming goes, it might be that this attraction to softness is meant to bring individuals together, for example mother and child, close kin or friends. Second, one might presume that “in the wild” soft material is scarce. If such stimuli were perceived as merely “not-unpleasant”, or neutral, an individual would “take it or leave it” ––– meaning they would “leave it”, since carrying things around is bothersome. However, if the material is inherently pleasant to the person, that is, if their “brain rewards them” for touching it, then it might be carried around and thus be available in a future situation where it could become useful, for example to pad a hard surface to sit or lie on. The reward inherent to soft texture therefore solves a logistic problem. Had such material been widely available, its obtainment would not have been a difficulty.
It is not the case, however, and so the inherent reward solves, through simpler cognitive means, a problem that could otherwise only be solved through forethought and planning. This, of course, could be said about all inherent rewards. Obviously a vertebrate with no rewards whatsoever would not only go extinct, but would not even survive for more than a few days. Some reward systems are oriented very directly towards existential prolongation, such as the joy of eating when hungry and of mating, but others do so in an environment-dependent way, meaning they are effective only under certain circumstances.15
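The padding argument above can be made concrete with the definition of pressure; the numbers here are purely illustrative:

```latex
% Peak pressure under the same load: point contact vs. padded contact.
\[
  p = \frac{F}{A}, \qquad
  p_{\text{ragged}} = \frac{50\,\mathrm{N}}{1\,\mathrm{cm}^2} = 500\,\mathrm{kPa},
  \qquad
  p_{\text{padded}} = \frac{50\,\mathrm{N}}{100\,\mathrm{cm}^2} = 5\,\mathrm{kPa}.
\]
```

The same body weight spread over a hundred times the skin area yields a hundredth of the peak pressure, which is what keeps the nociceptors quiet.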
Another unpleasant sensation is “itch”, presumably meant to prompt humans and other vertebrates to respond quickly to an insect landing or crawling on their skin by scratching the area. If the sensation were pleasant, one would have learned not to scratch and instead indulge in it. There are fabrics with a prickly texture which impinge on the skin in a manner imitative of the movement of insects, at least at the gross resolution of the skin’s tactile sensation, and which therefore elicit itchiness and unpleasantness.
I think that there’s an inherently rewarding experience which underlies the actions people take ––– perhaps, as adults, particularly when they are not so engaged with their external environment –– such as boredly tapping their fingers on a table, fiddling with or twirling little objects, twisting the coil of a phone cord (in the old days), scribbling on a piece of paper during class or a conversation, and so on.16 I think it’s the same one which makes it pleasurable for toddlers and kids to play with “geometrical” toys.17 My hypothesis goes like this:
There’s no contention about the fact that tool use was advantageous both to individual humans and to the species as a whole.18 Knowledge of specific tool use is transferred from one individual to another through observation.19 But if so, where did this knowledge originate? Nowadays a lot of intention goes into innovation, so one might imagine that it is forethought which has led to most inventions: scientists, engineers, artists and so on furrowing their foreheads and doing their things until they came up with something new. But we are also aware that discovery, which goes hand in hand with invention, is often, if not stereotypically so, incidental. Indeed, the most recognized exclamation of discovery is “eureka”, which originates in the anecdote of Archimedes realizing a mechanical principle in the act of easing himself into a bath. While less directly exemplary of the point I want to convey, the anecdote about Newton and the apple likewise has the scientist in interaction with the world around him,20 that is, the act of invention was prompted by some event (bath taking, apple falling), not emerging spontaneously in the inventor’s mind after he decided to engage in invention.21
Let us imagine a hypothetical ancient (biologically modern) human society that did not develop tool use. How would it develop it? Not through reasoning alone. As stones don’t spontaneously fall on nuts, nor do pointed sticks fling themselves against the bodies of deer, I think there’s nothing “reasonable” in the intentional invention of simple hammers or spears ––– nothing that would “a priori” lead humans to believe that the application of such tools would be advantageous, e.g., by exposing the edible part of a nut or by incapacitating a living, running prey animal. Imagine how many steps there are to reach a stone knife. First, as potential inspiration, sharp stones are not a very common occurrence in nature. Second, as stones are hard, it is not obvious that they can be broken into pieces. Third, it is not obvious that any stone “contains” within itself a sharp stone (in the same sense that a marble block contained Michelangelo’s David). Fourth, it is not obvious that a sharp stone could be used to cut open an animal’s hide, or anything at all (and/or that there’s an association between how thin and hard an object is, or how painful it is to press it against one’s arm, and how well it can cut other things). With the natural environment left to observation alone, phenomena that could hint at any of the above are rather rare.22
Therefore, I am of the opinion that the invention of tools was assisted by an evolved drive to grab, take hold of and generally manipulate and interact with objects in the environment, which led to opportunities to discover their properties and potential uses ––– opportunities that even curiosity would not have brought about, as there is little mystery that a boulder, a tree, plant seeds or a flowing rivulet could hold, especially if one has been living among them from one’s very beginning. Or, looked at from the other side: people’s absent-minded fiddling is an evolved inclination that had conferred an adaptive advantage as a springboard towards tool use ––– and possibly still does. Stronger drives to fiddle led to more exploration, exploration led to more discovery, and discovery led to better survival via the utilization of tools. Technology propagates via observation and progresses via accidental, drive-driven discoveries. Just as squirrels have evolved fixed action patterns that lead them to bury nuts in the ground and thereby have food reserves for the winter, and birds evolved such patterns that lead them to assemble a nest from twigs and thereby have a safe spot for their eggs and offspring,[^VacuumActivity] so humans evolved fixed action patterns that make them fiddle and play with objects around them and thereby come to familiarize themselves with them and their instrumental potential.
[^VacuumActivity]: However much these behaviours have “content” ––– as specific as the nuts and twigs, as general as “food” or “shelter making”, or as abstract as “caching food for winter” or “preparing a nest for future offspring” ––– they are so-called “hard-coded”: innate rather than learned during an individual’s lifetime. The Wikipedia page on “vacuum activity” has several interesting examples of such behaviours being executed by animals in the absence of the obvious “natural objects” of the behaviour, such as birds snapping at the empty air as if edible insects were there.
Many cat owners have undoubtedly been exposed to the self-driven (rather than “consciously” goal-oriented) nature of fixed action patterns. Cats who use a litter box will at times step out of its little frame, perhaps to get a better angle on their fresh production, and then sweep at the hard floor as if heaping sand; then they give an assessive sniff to judge the coverage, sweep some more at the floor, and eventually leave the scene with a sub-par performance, satisfied enough with their blank-sweeping.
To illustrate the decoupling of reasoning and behaviour –– the gap between our innate motivation to do something and our understanding of the purpose of the action –– I’ll give a “complementary” example. Many people with obsessive-compulsive disorder23 “know” that their behaviour is not “rational”, yet they feel “compelled” to enact it, a compulsion that might be accompanied by worrying thoughts. “Know” and “compelled” are figures of speech which assume a dividuality of the mind, that is, its non-unity. But the brain that knows and the brain that compels are the same brain. One can act without knowing why it might be “good” (since the behaviour was “designed” by evolution rather than by the individual), and here, we can see, one can act even despite knowing why it is “bad”. Of course the same also holds with regard to people with an undesired addiction, though that outcome is brought about through the use of psychoactive substances, that is, external agents that directly influence the operation of the brain ––– and therefore the mind ––– rather than being a “cognitive conflict” stemming from inherent functions.
There’s one example from the realm of non-human animals of such a bootstrapping ––– the evolution of an innate drive for one thing that is designed to lead the way towards learning a behaviour that is by itself advantageous ––– which I encountered in a lecture by Robert Sapolsky.[^SapolskyChick] He mentions chicks and their seemingly innate ability to peck at grubs. However, Sapolsky says, it seems that the heritable trait is the tendency to peck at one’s own feet. Presumably, despite their impulse to do it, this behaviour is not particularly rewarding, but sloppily the chicks learn that pecking at grubs –– which visually look very much like their feet, and are also on the ground –– yields a yummy meal that satisfies hunger. Sapolsky mentions a study where newborn chicks had their feet covered, and these did not peck at grubs either. That is, says Sapolsky –– I’m paraphrasing –– the end goal of making the individual creature peck at worms for food is achieved through an evolved impulse to do something else, namely to peck at one’s feet, combined with learning ability (pecking one’s feet -> hurtful, negative reinforcement of the behaviour; accidental pecking of similar-looking worms -> rewarding, positive reinforcement of the behaviour). It’s possible that the impulse to peck their feet is greater when they are hungrier, but of course this behaviour per se would not alleviate the hunger; and when eventually they peck at mealworms it is not because they “know” that it will satisfy their hunger, but out of a pure visuomotor instinct ––– it is only after they have experienced this satisfaction by accident, indeed, after discovering the satiating effect of pecking at worms, that they learn to engage in this activity when they are hungry.24
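The bootstrapping described above can be sketched as a toy reinforcement model. This is my own illustration, with made-up propensities, rewards and learning rate, not a model from the literature: an innate pecking impulse generalizes to anything foot-like, and simple reward learning then suppresses foot-pecking while amplifying worm-pecking.

```python
import random

random.seed(0)

# The innate bias generalizes to anything foot-like, so feet and
# (similar-looking) mealworms start with the same peck propensity.
propensity = {"foot": 0.5, "worm": 0.5}
reward = {"foot": -0.1, "worm": 1.0}  # pecking feet hurts a little; worms feed
alpha = 0.1                           # learning rate

for _ in range(500):
    stimulus = random.choice(["foot", "worm"])
    if random.random() < propensity[stimulus]:  # the innate impulse fires
        # Reinforce toward 1 on reward, toward 0 on punishment.
        target = 1.0 if reward[stimulus] > 0 else 0.0
        propensity[stimulus] += alpha * (target - propensity[stimulus])

# After learning, worm-pecking dominates and foot-pecking is suppressed.
print(propensity)
```

The point of the sketch is only that no knowledge of “worms are food” is needed anywhere: the impulse supplies the behaviour, and the accidental reward does the rest.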
I found only one paper that described such an experiment, presumably the study Sapolsky refers to.25 In the paper a somewhat different interpretation was offered than Sapolsky’s. Wallman’s interpretation of his experiment’s results was that the presence of the visual stimulus (the chicks’ feet) was responsible for the chicks’ accommodation to the visual stimulus of a mealworm. When they had been deprived of the sight of their similar-looking feet, they found the mealworms a curious visual phenomenon, and instead of pecking them up to eat them they mostly fixated on them, pecked at them exploratorily or simply turned away from them. He also suggests that had the chicks been raised in a stimulus-rich environment (the experimentees were raised inside a small cardboard-cylinder “house”), the effect of this deprivation would have been lessened.
While Wallman also seems to suggest that the feet are helpful in preparing the chicks for engagement in the adaptive behaviour of eating mealworms, I think he misses some crucial points. First, I’ll say that though I have not read about this in the academic literature, I’d assume that chicks do indeed peck at their feet, if only because chickens do peck their own (and each other’s) feet (funnily enough, I found one post on a message board where somebody was wondering why her newly acquired chickens were pecking their feet, and speculated they might confuse them with worms). While Wallman is probably right that the mealworms were a novel stimulus (and therefore, to add to his words, a potentially dangerous one) to a chick deprived of ever seeing its own feet, I think he ascribes too much responsibility, as it were, to the visual system. That is, he seems to be of the opinion that the only thing that stopped the chicks from eating the mealworms was their visual unfamiliarity, as opposed to, for example, the fact that chicks are not born knowing “what food looks like”; it is not obvious that chicks are supposed to peck at worms. In other words, what seems to me to be crucial is the feet’s provision of an imperative to peck at them (and subsequently at other similar-looking things) rather than their provision of an accommodation to a visual pattern shared also by potential prey.
A question that could be asked is whether the feet –– their appearance at the bottom of the chicks’ visual field –– are really necessary. Presumably, if the chicks have an impulse to peck at the appearance of their feet, they would also have pecked at a mealworm on the floor even if they had never seen their own feet. The assumption being: if there’s an innate visual schema of that which the chick is to peck at, and the feet match that schema visually, and, in turn, mealworms resemble the feet visually, then the mealworms also resemble the schema, and there is no need for the intermediary feet. That being said, perhaps in this transitive chain each step accumulates some difference, and the mealworms are different enough from the original schema that the chicks must first learn that their feet are the object they should be pecking at, after which the mealworms, as well, suddenly look similar enough.
But this could be taken a step further, as it were. Another question is whether the impulse to peck is directed at a visual pattern (elongated, light, segmented) or is a proprioceptive imperative that then takes part in an imprinting process: initially the chicks are guided only through proprioception to peck at their feet, but they learn the visual form of what they feel an impulse to peck at (their feet), and later they start pecking at things that merely resemble that stimulus visually (mealworms; note, however, that from a proprioceptive perspective as well, these worms ––– being close by and on the ground when pecked at ––– resemble the feet). Unfortunately the Wallman paper did not mention self-pecking at all, and one wonders whether the chicks who had their feet covered still pecked at them, which would suggest that what guides this initial behaviour is not visual. One could then go a step further in experimentation: paint the chicks’ legs in different patterns, place them in a space with worms of different patterns, and see whether there’s a tight correlation between how the chicks’ feet looked and the worms they preferred to peck at.
The alternative ––– that the innate schema is visual ––– doesn’t make sense to me. First, there’s a redundancy in having the schema of a worm twice, once as a cognitive visual schema and a second time in the shape of one’s feet. The similarity of the feet to mealworms could be an accident, but then the feet would be merely collateral ––– the self-pecking an undesired side-effect rather than a crucial component ––– and covering the chicks’ feet should not have affected their behaviour towards mealworms.
Another example can be taken from humans. Even congenitally deaf infants start babbling, uttering “quasi-vowels”, like their hearing counterparts, within a few months of age. Auditory input is needed for such babbling to develop into the syllables out of which words can be formed, but the utterance itself is innately motivated, “independent of environment”. As with tool use, we have a kind of “bootstrapping”, in this case for the sake of language development. It is not the case that infants come to understand that vocal utterances can be employed to communicate with other individuals, or to somehow manipulate the environment to one’s advantage, and then decide to learn this wonderful tool called language and start practicing by forming syllables. Rather, utterances are made instinctually; over time the infant learns to produce them in a way that imitates other humans in the environment (probably also an innately motivated behaviour), and yet further on the child starts recognizing that these utterances are formed by others in certain patterns, and that different utterances have different effects (prompt different reactions from others) –– that is, the child starts recognizing semantic content within these utterances, and starts employing them for their own ends.
So, again, evolution formed innate behaviour that presents to the individual an experience on which discernment is applied, and thus leads to the learning of useful skills. The hands want to grab, and lead the individual to learn tool use; the tongue wants to make noises, and leads the individual to learn language use. Evolved behaviour provides the clay while the (also evolved) brain shapes it into pottery. Without the clay, the individual would have had to be a genius struck with a great insight to realize –– out of the blue, as it were –– that one could use materials26 in the environment to shape other materials and use them for one’s purposes, or that one could employ a variously modulated voice to coordinate behaviour with other individuals.
Another obvious domain of rewarding tactile perception concerns the stimulation of erogenous zones. Their erogenousity is easily explained from an evolutionary point of view. The drive to “use” the genitals, perceived as a sexual drive or “horniness”, leads humans to engage in reproductive behaviour. I think this “prospective” drive is distinguishable from the erogenousity (“retrospective/ immediate” reward) of the involved organs themselves, where things can get a bit more complicated. With the penis things seem rather straightforward: the pleasant feeling of the stimulation of the shaft via intercourse facilitates impregnation, while the post-ejaculatory refractory period, during which further stimulation tends to be unpleasant, keeps the act from going on longer than necessary, possibly also preventing the glans penis ––– thought to be shaped so as to scoop out “competitive sperm” ––– from removing its own semen. Outside the context of intercourse, this rewarding stimulation encourages ejaculation via masturbation, which “refreshes” the semen and therefore improves the quality of the next ejaculate.
However, reviewing published papers, it seems like things are a bit more elusive regarding the female genitalia. For example, it seems to not have been firmly established whether “the G-spot exists”, and whether vaginal and clitoral orgasms are distinct. My opinion is as follows:
The clitoris’s function is to induce orgasm, and the function of orgasms is to “exercise” the muscles required for a successful birth. Several questions might follow. First, why is it necessary to have an external “trigger” for orgasmic contractions, instead of having them occur spontaneously? Women do experience spontaneous orgasms at night (like men and nocturnal emissions, beneficial for the same reason masturbation is), as well as, relying on anecdotal accounts, in a waking state during pregnancy, unrelated to any behaviour or thoughts. Braxton Hicks contractions, experienced during pregnancy, are believed to be preparations of the body for the imminent birth. They also happen after orgasms during pregnancy, a time when many/some women report elevated libido, even when orgasms are painful and more unpleasant than pleasant. To me it seems that orgasms and Braxton Hicks contractions are in some sense qualitatively almost identical, but different quantitatively. If orgasms are light exercises throughout life, then during pregnancy the body moves on to more intensive preparations towards the marathon that birth is. Second, one might ask: why “exercise” at all? Generally speaking, what causes muscular growth is not exercise per se, but rather the body’s response to it. This “responsiveness” is an efficient way to handle “anabolic logistics” in an environment of scarce resources, where growing and maintaining an unused or under-used muscle is wasteful. However, the muscles necessary for birth-giving can be “assumed” to be used in the future whether they have been used or not: having a child is the only way to propagate your genes27, and if you don’t propagate your genes you are selected out; if you don’t get pregnant then, evolutionarily speaking, whatever energy you have saved by not maintaining these unused pregnancy muscles is meaningless, as you have no progeny.
Therefore, it stands to reason that the body could have “maintained” these muscles, perhaps even more efficiently, without an external input such as contractions.28 I have several things to say to that. First, I am far from being an expert on the subject, and it might well be that beyond providing a signal to the body, exercise, via utilization of the muscles, is beneficial for their health in itself. Second, as stated above, evolution does not work by designing our physiology (no “intelligent design”: consider the blind spot of the vertebrate eye). As the clitoris is homologous to the penis, whose pleasantness under physical stimulation is necessary for successful conception, it’s possible that it “borrowed” this pleasure for the sake of vaginal and uterine exercise; that is, a skeletal-muscle-like, exercise-motivated growth, with a pleasure-motivated tendency to exercise the muscles (through the spasms caused by orgasms, which in men facilitate ejaculation), made it unnecessary for the body to “spontaneously” grow the pelvic muscles during pregnancy. Not fulfilling any other task, the clitoris shrank to the size of a toggle switch. Any individuals who had lost the clitoris completely would have had to reinvent some other pathway for conducting these exercises (throughout this discussion the assumption is that the relationship between exercise and muscle growth was “already set” at the relevant evolutionary times).29
Besides being used for reflexive stimulation, I suspect the clitoris might have had an adaptive role in the evolution of monogamy, or more generally of some sort of social relations, in virtue of its accessibility to stimulation by others. Bonobo females, for example, often engage in frontal mutual genital rubbing;30 whatever “function” it fulfils, the act itself is clearly social. However, this topic broadens out well beyond the intended scope of the present investigation. Perhaps I’ll leave it for another time.
There is some debate in the literature about whether the G-spot “exists” and/or whether the vaginal orgasm is distinct from the clitoral orgasm or is simply brought about via indirect stimulation of the clitoris (which is a bigger organ –– they all like to cite –– than the small exposed pip). What can definitely be said is that the anterior part of the vulva is strongly innervated, and that the somatosensory signal thence is differentiable from the sensation of the clitoris. Whether one attributes it to the “greater clitoris” or not seems to me completely irrelevant.
Part of the mystery stems ––– I believe ––– from the investigation into the matter being misguided. I believe that the female orgasm as an adaptation did not evolve to address an issue of sexual intercourse. It’s common knowledge that many women cannot achieve orgasm via penile-vaginal sex alone and require additional clitoral stimulation. As far as the “vaginal/ G-spot orgasm” goes, I do think it’s a separate phenomenon (for example, it, but not the clitoral orgasm, elicits “squirting”31), though this is rather immaterial to my thoughts on the subject. Neither the clitoris nor the G-spot, nor orgasm in general, is there to either “reward” copulation or motivate it.32 The clitoris is not quite in the right place to be stimulated during penile-vaginal intercourse, nor does the G-spot seem particularly well suited for the male anatomy to stimulate it enough.
What I think the female orgasm is meant to facilitate is childbirth. Unlike the male penis, a child’s head is more than wide enough to stimulate and push against the walls of the anterior vagina. In addition, all the responses of the orgasm seem conducive to delivery but unnecessary or even inimical to conception. First, the orgasm elicits contractions, including of the uterus, which would facilitate the movement of a fetus out into the world. I’ve seen it mentioned in the literature, hypothetically, that these contractions help sperm swim through the uterus towards the fallopian tubes, though research has found no correlation between fertility and orgasming in women.33 In addition, and I suggest it only tentatively, squirting, being a strong gush of fluids coming outwards from just above the vaginal opening, might help push out the baby, whose head would be covering the urethral orifice. Second, the female orgasm is extremely analgesic, contrary to its male counterpart. I think this speaks for itself. While pain is very helpful in turning humans and other living beings away from harmful stimuli, the necessity of birth, during which a human head of significant size must pass through the vulva, makes the associated pain not only unhelpful but potentially dangerous; the experience of pain affects behaviour, making us withdraw from and/or press against what hurts, which in the case of a painful birth could manifest as pressing against the baby, virtually preventing it from coming out.34 Having pain switched off at the moment of “fetal expulsion” seems like a very adaptive mechanism; and consider how well the G-spot is situated for the baby’s head to press on it in a timely manner.
I’m speculating here, but I’m not referring to an unheard-of phenomenon. Some women do report experiencing an orgasm during childbirth. Given that in Western countries most births happen in the public space of the hospital, and given general attitudes towards sexuality and its decoupling from childbirth, a woman might feel ashamed of having had an orgasm during birth, or at least of admitting it, in the same way that she might be ashamed of being “turned on” when breastfeeding. In addition, given that women are often heavily medicated during delivery, they might be too desensitized to either have an orgasm triggered or to feel it if it did occur.
The pleasant feeling of the stimulation of the nipples facilitates breastfeeding,35 also an adaptive behaviour. As for the pleasure of anal sex, my impression is that it arises not from an erogenousity of the rectum, but rather from the distal stimulation of adjacent organs, namely the prostate in males and the G-spot, clitoral legs and the cervix in females.36 But in the male case we have only shifted the question, which is now “why is the stimulation of the prostate pleasant?”. My guess is that the pleasure of anal sex is evolutionarily irrelevant, in the same way that it’s irrelevant that fingers fit keyboard keys so well, or that the protrusion of the ears is perfect for holding eyeglasses on. In relation to the “state of nature”, anal sex is rather hi-tech: you need good lubrication. I don’t know whether there is something readily available in nature; my guess is not. Oil has been around for maybe ten thousand years, presumably not enough time for “anal pleasure” to evolve. And, either way, how would it have been adaptive to engage in anal sex? I presume that in men this pleasure is incidental (a “spandrel”) to the adaptive stimulativeness of the female prostate, the so-called “Skene’s gland”, which might be behind the erogenousity of the G-spot.
Another pleasant touch besides the stimulation of erogenous zones is the interpersonal. Hugging, caressing, cuddling &c are widely enjoyed. To address the evolution of their pleasantness a dynamic approach must be employed. What is reciprocal on the level of conspecific individuals is reflexive on the level of genomes. That is, if a population evolves whose members enjoy giving and receiving hugs, it is because there has been a shift of the “average genome” towards having the genes that would make one inclined both to give and to receive. I said “reflexive” because the gene that makes one member enjoy receiving hugs is a replica of the very gene that makes another member enjoy giving them. Or, to take another example, if there were a gene that facilitated cooperation, then while on the level of conspecifics it would be cooperation between members, genetically it would be as if the gene were cooperating with itself, with its own replicas within the other members. Hence the need for a dynamic approach: presumably, for a trait such as the tendency to hug to evolve, it must be beneficial even if the recipient does not possess it, otherwise it would not be adaptive.
Alternatively, it is possible that such a trait evolves incrementally: some “mitigated” version of it first evolves and becomes widespread; it then intensifies marginally in one member in a way that is adaptive in the new social environment, such that it spreads to the rest of the population, and so on.
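This condition can be illustrated with a toy replicator model. All the numbers and payoffs below are my own illustrative assumptions, not from any source: carriers of a “hug gene” pay a fixed cost, gain a little on their own, and gain extra when a randomly met partner is also a carrier. The sketch shows that a version which pays off only when the partner reciprocates cannot invade from rarity, while a “mitigated” version that helps even unilaterally can sweep the population.

```python
# Toy frequency-dependent selection model (haploid, infinite population).
# Assumed payoffs: carriers pay cost c, gain b_solo regardless of partner,
# plus b_pair when a randomly met partner is also a carrier.

def next_freq(p, b_solo, b_pair, c):
    """One generation of replicator dynamics for carrier frequency p."""
    w_carrier = 1 + b_solo + b_pair * p - c   # carrier fitness
    w_other = 1                               # baseline fitness
    mean_w = p * w_carrier + (1 - p) * w_other
    return p * w_carrier / mean_w

def evolve(p0, b_solo, b_pair, c, generations=2000):
    p = p0
    for _ in range(generations):
        p = next_freq(p, b_solo, b_pair, c)
    return p

# A trait that pays only when the partner reciprocates (b_solo < c)
# cannot invade from rarity:
print(round(evolve(0.01, b_solo=0.0, b_pair=0.10, c=0.05), 3))   # → 0.0
# A "mitigated" version that helps unilaterally (b_solo > c) fixes:
print(round(evolve(0.01, b_solo=0.06, b_pair=0.10, c=0.05), 3))  # → 1.0
```

The design mirrors the text’s point: when the gene’s benefit depends entirely on meeting another carrier, a rare mutant almost never does, so selection removes it; give it any unilateral benefit and it can spread, after which the pair-dependent benefit intensifies the selection for it.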
For one thing, as far as toddlers are involved, it would be adaptive for them to hug when it puts them under the protection of a parent, and for the parents it would be adaptive to hold their children close and protected. This is also true for adults: stress reactions are adaptive in cases where one might expect to imminently fend for oneself. A hug communicates that there is a protective other nearby, and it would be adaptive to decrease the stress reaction in this situation, as one is more protected: that is, a stress response is adaptive when it is triggered during danger and maladaptive otherwise, and having a fellow at one’s side diminishes the danger of a situation and therefore renders the appropriate stress response lower than otherwise. For example, a fierce animal or an angry human being would probably be less likely to attack a pair or a group than a single individual. Beyond stress relief, hugs could be employed to form such coalitions, and a coalition being a good thing, it could be expected that forming one would be rewarding, that is, that it would feel pleasant.
Proprioception
There seem to be some patterns of movement that are inherently pleasant for humans to execute. I’d like to discuss three kinds: walking, athletics and dancing.
That humans, like other animals, should have a drive to displace themselves seems reasonable: unlike plants, animals must visit different locations to satisfy the different requirements of survival and propagation: to find food, water, shelter, mates. As alluded to above, many organisms, including humans, behave in advantageous ways not due to an understanding of what is right for them to do, but simply by having the proper inherent motivational system ––– a system of so-called “fixed action patterns” which relates a stimulus (including an internal state) to some appropriate reaction. Further obviating the need for any “understanding”, this system can be modular, so that the result of one inherent behaviour provides the triggering stimulus of the next (as with the aforementioned example of cats and their toilets), and the animal need not “understand” nor see the relationship between the sequence of drives in order to execute it correctly, assuming that the circumstances are similar enough to those in which this sequence evolved.
Therefore, unless there’s a “more pressing” stimulus inducing the individual to act another way, a drive to walk when idle can be advantageous by facilitating exploration, which might lead to the discovery of new resources. This is not unqualified, however: exploration might also lead to discovering –– or being discovered by –– danger. For one thing, humans and other diurnal creatures are vulnerable to danger particularly in darkness, so it should be adaptive to dampen the motivation to move at night, an adaptation which might be behind the so called “seasonal affective disorder”. Disorder or not, anyone who lives far from the equator is familiar with the dispiriting effect of earlier sunsets and shorter days following the onset of winter.37
This is somewhat tangential to my exploration of “aesthetics” here, but it’s noteworthy that walking facilitates thinking. If I am not to rely on my own experience or the accounts of erstwhile thinkers, I can bring forth a paper from three years ago that found that walking facilitated creative thinking.38 Why should this be the case? I can speculate. Thinking is metabolically very costly: the human brain consumes around 20% of the body’s energy. If walking is used for exploration, and exploration leads to novel situations, it stands to reason that costly thinking would be reserved for those situations in which it’s probably going to be needed the most. Alternatively (or in addition?), despite our intuition that walking is “easy” and “unconsciously controlled”, our bipedal locomotion is a computationally challenging act of balance. Unlike quadrupedal mammals or swimming cetaceans, which start moving like adults do seconds after birth, humans need roughly a year postnatally to be able to walk independently. Deficient executive function –– of which attention control, reasoning and planning are a part –– is correlated with more falling,39 while walking (as opposed to being seated) is correlated with impaired cognitive performance on arithmetic tasks in older individuals.40 These might seem like contradictory pieces of evidence ––– walking both hinders and encourages thinking ––– but it seems like what happens is that thinking “changes modes”. The Oppezzo and Schwartz paper38 discusses exactly this point and speculates on the trade-off between convergent and divergent thinking during (and right after) walking.
That walking is, or can be, enjoyable seems clear to me. When idle, for example while waiting for something, people will often start pacing around if they have the freedom to do so. People go on walks for their own sake. If the adaptive pressure was not towards environmental exploration, perhaps it was due to this enhanced creative thinking? That creative thinking could be helpful seems obvious, so perhaps the drive to walk was adaptive by turning people to think creatively?
That is somewhat less likely. It makes sense that walking stimulates creative thinking which would assist one in coping with the novel situations one gets oneself into. However, if “creative thinking” were “universally good” (independent of situation), then humans would simply evolve more creative thinking as a constant trait, with no need to involve physical movement. On the other hand, it is possible that creative thinking is good when one is not engaged in a particular goal-directed task, i.e. when one is idle, and if there is already a system that turns on creative thinking while walking, then it could, for whatever brain-architectural reason, be easier to evolve a drive to walk while idle (to facilitate creative thinking) than to evolve a mechanism that leads the individual to think divergently when idle.
Or, here’s an alternative. Perhaps it’s not about the difficulty of the architectural evolution, but really about the end product. Perhaps it is an advantage to have walking as a proxy for idleness. Perhaps “idleness” is a rather ill-defined state, and having it proxied by non-directed walking is a safeguard against misfires. Consider that the brain is the controller of the body, and that “divergent thinking” is a description of the state of the controller: in a sense something that is beyond the control of the brain, though of course it gets a little murky to talk about it this way and one would have to go to a finer resolution. Instead, imagine this scenario: a human goes hunting with a small band. They detect a deer and try to stealthily approach it. Alice, our human, is now crouching by a bush, waiting. She is part of a well trained and well coordinated group, and she knows that at the moment she must wait until some event happens, e.g. the deer moves in a certain direction or some of her bandmates move, before she herself moves. Had “divergent thinking” evolved to be triggered directly by “idleness” –– for example, the state of not moving and waiting for something to happen –– Alice would have started thinking about all kinds of things unrelated to the hunt and could potentially have gotten completely absorbed in her thoughts, to the point of not noticing the events she was waiting for and of otherwise interfering with her function within the hunting band. Now, what if idleness triggered a drive to walk, which triggered divergent thinking? Perhaps the crouching Alice would feel that she has had enough crouching and desire to get up and stroll, but she knows that she is within an ongoing hunt and so is not at liberty to move as she pleases, but must remain still and silent.
What’s the difference between suppressing the will to move, as in the latter case, and suppressing divergent thoughts, as in the former? The difference is that suppressing the will to move requires merely control over the body, while suppressing divergent thoughts requires control over the mind, that is, requires the mind to control itself. This is of course but a shorthand: the movement of the body is a function of neural activity, driven by the brain. But as I tried to allude to earlier, controlling limbs and controlling the pattern of thinking are phenomena of different “orders”/“meta-levels”. People can and do to a certain extent control their thinking, but a certain pattern of thoughts is something that “happens” to the mind rather than something that it does. If you do not find this convincing, then I would direct you to anecdotal evidence, perhaps to your own experience. People far more commonly suffer from the inability to control their minds (being afflicted by intrusive thoughts, worrying about things when they know it’s irrational, being unable to quiet the mind and fall asleep, and so on) than from the inability to control their limbs. I’d even conjecture –– and hope you agree –– that most people at some point experience being mentally occupied with something they don’t want to be occupied with, and that most people die without ever experiencing a feeling of being unable to control their bodies.
What about athletic activities, such as running? I suppose that, at base, running is not inherently enjoyable, which makes sense. It expends quite a lot of energy and it would be wasteful if people did it for no external reason at all; walking is just fine if we merely want to move ourselves. Running causes pain, and so one needs a motivation to offset the “price” of this activity –– for example, one would endure that pain when chasing prey ––which would provide food–– or when running away from predators, which would save one’s life: better to suffer a bit than to die of starvation or as a meal for somebody else. Nowadays people also run for no other reason than its health benefit, but that, too, is an external motivation.41
In addition, there’s the so-called “runner’s high”. I suppose that in a sense it operates on an “assumption” similar (putatively, by me) to that behind the female orgasm during birth: there are circumstances in which pain perception is a disadvantage. While the sensation of pain is adaptive insofar as it prevents humans from unnecessarily expending energy through running, it is maladaptive if it hinders humans from achieving a needed goal through running. Now, it is not that pain completely vanishes; it is still required in order to set the pace and prevent individuals from killing themselves through overexhaustion. However, if an individual has run for a considerable amount of time ––despite the pain, and therefore supposedly due to some justifiable motivation–– and is running at a “non-lethal pace”, a relief of pain would help them achieve the goal they were striving for and into which they had already invested much energy.
I think that in a discussion of the aesthetic aspect of dancing, that is, of its production, one must, given its social aspect, consider the perception of dancing. People also dance alone and as a performance art, a case where there’s a clear distinction between “performer” and “audience”, but I’d say that these, too, originate in dancing as a social activity, and more specifically in “courtship dancing”. Many dance forms are structured around a male and a female dancer, many group dance forms are centered around revolving couples (an arrangement allowing for a fair distribution of interfacing, with everyone dancing with all potential partners), and the dance floor is an arena where hooking up often takes place. Some dances are better than others, that is, some dancers are better than others, that is, they make dances that are more widely appreciated, that are more attractive (I’m equating here, as in the more general case of stimuli at the beginning of the essay, a good dance with an attractive dance). I’d say there are three different components that go into the evaluation of a dance, and though they are to a greater or lesser extent distinct, like the personas of the trinity, they are not independent of each other.
- Intention, or interpersonal signals. A dance can express flirtatiousness, and when it’s directed at ourselves, and the dancer is attractive, then the dance would be attractive, that is, increase the attractiveness of the dancer. If the dancer is unattractive, their flirtatious dance directed at us might be disconcerting and we might want to move away –– in other words, the dance would be repelling. Alternatively, if the dancer is attractive and the dance flirtatious but directed at somebody other than us, we might feel our chances are low, that any courtship efforts on our part would be in vain and that it is better to consider other people; in that case, again, the dance would be repelling.42
- The dance moves. Both males and females prefer (watching) dances which display higher variability.43 Dance can potentially signal health and advantageous traits, that is, “fitness”, and it does so increasingly well by being more dynamic and thereby displaying motor control, stamina and perhaps also creativity. Something that seems to me unduly neglected in the literature on dance evaluation is the motion of the face. It seems natural to me that people would mostly think about the movement of the body when thinking about dances, and in a way the methodology of most experiments investigating dance precludes facial expression from having any effect: they tend to use “point-light” dancers, basically showing dancing “stick figures” to the evaluators in order to remove the effect body shape might have on the evaluation. However, facial expression, in my opinion, can greatly affect the attractiveness of a dance, and not just through interpersonal cues (discussed just above). First, insofar as the motion of the limbs narrates a story, facial expression can add to it, concurring with the motions or adding another emotional aspect to the dance. Good control of the face therefore displays further motor control and, again, potentially creativity as well. Moreover, it can display effortlessness. Exhaustion shows on one’s face, as does panting, for example, and the lack of any such features is an “honest display” that the performed dance is effortless. That, or the dancer is exerting strong control over their face, which is by itself laudable (though, of course, given the fitness of the dancer and the demands of the dance, the former has to have a certain oxygen intake within a period of time, and so “faking it” can only last so long). Beyond the physical effort of the dance, the coordination of the dance demands a cognitive effort that the dancer must cope with, and which, too, I believe, might show on the face.
I noticed long ago that some drummers contort their face and mouth in various ways as they drum patterns that are beyond the very simple. One can see it watching John Bonham drumming, for example, but he is by far not the only one.44 Since these motions of the face seem to appear only at rhythmically difficult patterns, and since this difficulty is subjective, these movements of the mouth and their absence could indicate how effortful the dance is on the cognitive level. Maybe. Given that dancing is more aerobically taxing than drumming but simpler rhythmically, that is, in motor control, I suppose a calm or otherwise controlled face would mostly allude to the physical easiness of the dance.
- Physique. That people are differentially attracted to different body shapes is obvious, and therefore how attractive a dancer is depends on their body shape. However, the motion a body makes affects how its shape is perceived. That is, while a still body (when simply standing, for example) might give one visual impression of its different ratios, a moving body would give a different (dynamic) impression.45 I think one anecdotal example would be the footage of the Australian hurdler Michelle Jenneke’s “warm up routine” which “went viral” in 2012. As an athlete she was of course fit, but in addition her warm up included a jump with a sway of the hips akin to the sway of female models on a catwalk but more extreme, and as she was wearing very little, her contours showed well. I think that her smile and her extraordinariness ––– acting sprightly amid the serious aides and the other athletes ––– played an important part in making the video stand out, but I would ascribe the main allure of the video to the impression her jumping made of her figure.
It does seem that people enjoy dancing, and given that people assess other people’s potential as mates (and perhaps as comrades too) also by their dancing (if an opportunity comes up), this enjoyment of dancing is adaptive, helping us find or solicit a mate. This enjoyment is nonetheless instinctive, that is, independent of whether one is at the moment trying to win someone’s heart, though, all the same, dancing is more enjoyable in a social context than alone.
Temperature
This is somewhat trivial, but I want to include it for the sake of completeness. Humans have a preference regarding ambient temperature, and whether they are too cold or too hot, a restitution of a convenient surrounding temperature is rewarding. This ideal temperature, I suppose, is around 20°, though it probably depends on the metabolic state of the body. I suppose the perfect ambient temperature would be one at which temperature-homeostasis is kept simply as a “side effect” of a conveniently operating metabolism, with the difference between the ambient temperature and ~37° made up by ordinary metabolic heat: the body puts no or very little extra work into generating heat (through caloric burning) when too cold, nor into activating mechanisms such as sweating, or slowing down metabolism, when too warm. There is also a preference regarding food temperature.
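The “side effect of metabolism” idea can be put as a back-of-the-envelope heat balance. The constants below (a resting heat output of ~100 W and whole-body insulation values) are my own rough assumptions, purely to show the shape of the reasoning: the neutral ambient temperature is the one at which passive heat loss exactly equals metabolic heat production, so the body neither shivers nor sweats.

```python
# Sketch of a thermal-neutrality calculation with made-up but
# plausible constants. At the neutral ambient temperature, passive
# heat loss equals resting metabolic heat production.

BODY_TEMP = 37.0        # °C, core temperature to be maintained
METABOLIC_HEAT = 100.0  # W, rough resting heat output of an adult (assumed)

def neutral_ambient(insulation):
    """Ambient temperature (°C) at which heat loss == metabolic heat.
    insulation: assumed whole-body thermal resistance, °C per watt.
    Solves (BODY_TEMP - ambient) / insulation == METABOLIC_HEAT."""
    return BODY_TEMP - METABOLIC_HEAT * insulation

print(neutral_ambient(0.10))  # naked body: ≈27 °C
print(neutral_ambient(0.17))  # light clothing: ≈20 °C
```

On these assumed numbers, a little insulation from clothing moves the neutral point from the high twenties down to roughly the 20° figure mentioned above; the point is only that such a neutral temperature exists and shifts with insulation and metabolic rate.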
Sounds
The auditory system receives as its signal the vibration of air, which conveys, like vision, information about the mechanical environment. While vision conveys information about surfaces and their spatial arrangement (“the oak is behind the birch”), sound carries information about “content”/depth/volume, that which has thickness and is occluded by surfaces (“the oak is hollow, or dense, or hard, or soft, judging by a tap on the trunk”). The loudness, pitch and timbre of sounds convey information about the distance, size and “shape” of their source. Insofar as living creatures are able to produce and shape a voice, auditory perception can be the receptive side of intra-species communication, and to an extent of inter-species communication (not always voluntary).
One can expect an inherent reaction to certain sounds, whether on a repulsion/attraction scale or otherwise, if these sounds or some quality thereof had been continuously indicative of phenomena relevant to survival over an evolutionarily meaningful time span.
I’ll start the discussion with an unpleasant sound: a baby crying. What characterizes a baby’s cry? It is pretty well optimized to grab attention. It is high pitched, which is an outcome of babies’ small size, but it is also easier to produce loud sounds that are high pitched than low pitched, and the frequencies babies cry in are those the human auditory system is most sensitive to, that is, perceives as loudest. The cries are intermittent, as babies have to breathe. They change slightly in pitch, either undulating or simply rising or falling, which helps make the crying salient on the one hand, and probably makes it a little easier on the vocal cords, given that babies often need to cry for prolonged stretches of time. The aversion of adults to this sound is clearly adaptive: being noxious, they want to put it out, which acts as a strong imperative for the adult to take care of the crying child.
It is my opinion that the unpleasantness of “classic noxious screeches”, such as an unfortunate stroke of chalk on a blackboard, a knife on a bottle, a train braking &c, stems from these being a superstimulus, that is, an extreme form of the component of the stimulus to which the “baby’s-cry aversion” responds. A constant high pitched sound might be unpleasant, but, it seems to me, when it has slight variations in pitch it becomes that much more obnoxious, and these variations, which are present in the crying of babies, are also there when chalk scrapes a board and in other such sounds.
Interestingly, after I formed this idea I stumbled upon a paper that compared scraping sounds with the warning cries of monkeys, found them to be similar,46 and concluded that the human reaction is a vestigial reflex. While on the one hand this seemed to somewhat confirm my suspicion, if only because somebody else had thought along the same lines, the conclusion sounded absurd to me and even scientifically dishonest. First of all, it’s a little odd that the scraping sounds were compared to the alarm calls of monkeys and not, for example, to those of chimpanzees, which are evolutionarily much closer to humans, unless, of course, the smaller monkeys make sounds that are higher pitched, thereby producing the results one had sought. By the way, while the alarm calls of chimpanzees are way off from chalk on a blackboard, they do cry/whine similarly to human infants.47 Second, again, there is a human-produced sound (the infant’s cry) that I’d bet, without checking, is as similar if not much more similar to the irritating scraping sounds than monkeys’ alarm calls, and so making these interspecies (and, in this case, inter-taxonomical-family) comparisons goes way too far. It wouldn’t make the, if you will, Occam’s razor’s cut.
I’d like to discuss two divisions of sounds separately: natural and artificial (manmade). A different way to categorize these is as environmental sounds versus sounds rendered arbitrarily at the self by other agents, that is, audible communication, whether the signals are evolved, such as crying, or learned, as in semantic language. In the latter case, the agent need not be another human being but can be, for example, a cat with its pleading meow.
Among pleasant natural sounds we have the sound of wind sweeping trees, the sound of flowing water, the sound of rain ––– all of which, it seems to me, are some kind of colored noise. Would it be adaptive to find these sounds attractive? I’d say so. Being attractive, such sounds could motivate humans to approach and settle in environments that are suitable for them: trees and vegetation suggest sources of food and possibly of shelter, and flowing water is indicative of, obviously, a source of water, and is a proxy of food presence, whether of edible plants or huntable animals nourished by the water. Rain, too, would similarly be indicative of a non-arid locality, though the aversive experience of being out in the rain speaks against including its sound among those that promoted the evolution of the attraction to this profile of sound.
Is it likely that such an attraction would evolve? One could presume, for example, that specimens (whether humans or otherwise) that lived in favourable environments survived while others didn’t, and that thenceforward the offspring of the former simply learned within their lifetime what is necessary for their survival (sources of food, water, shelter) and stuck within the boundaries within which the environment provided that. Whenever population growth exceeded what the locality could sustain, some wanderers might have left to seek another place to settle in, and chosen a new one, again, by relying on the knowledge they had acquired of what constitutes a good environment, instead of relying on inherent preferences.
But what is knowledge? If we think about it as an association, a remembered relationship between stimuli ––– for example, between the round red of an apple and the satiation of hunger ––– then for this knowledge to be useful it is required that the individual pay attention to the right stimuli and ignore the rest of the perceptual input. Humans, as well as many other mammals, seem to be pretty good at filtering the relevant perceptual information in order to make sense of individual “objects”, but are they as good at this kind of learning where habitat is concerned? Unlike an object, which is circumscribed as a source of perceptual experience, an environment is in a sense the entirety of the experience.48
Is the environment merely the sum total of its objects? If so, individuals, when choosing a place to settle, could simply tally whatever they desire that is present, and if anything is missing, move on. But this has two issues. One, it might be an intractable task, as the types of objects the individual had enjoyed in past environments could be inordinate, and of course we don’t assume that animals ––human or otherwise–– walk around with clipboards like surveyors. Two, it would be problematic if a certain locality were rejected because it was missing something that had been enjoyed in a previous environment. First, because it is not clear that through learning a person could figure out what is essential and what is merely pleasant: Bob finds an area with fresh water and plenty of food, but it lacks those pebbles he had known, which were so fun to skip on the water, so he continues searching, dying of hunger in the process of seeking an environment that has both pebbles and food, but much less of the latter than the first location he visited. Second, because it is not clear that a person would correctly generalize. What if Bob, who had been nourished by fruits and nuts, had to leave and seek another place, and reached an environment that could support him with fish and some small animals, but which lacked all kinds of fruit? Had he recognized the environment as favourable he would have settled in it, and once he was hungry for long enough he would have overcome his squeamishness and tried to satisfy his stomach with something he had never tried before.
For patterns of perception (whether auditory or visual: see below) to evolve to attract individuals to settle around their sources, they must have been consistently indicative of, that is, correlated with, hospitable environments. That trees and running water, and therefore their associated murmurs, were thus correlated is something I can but hypothetically speculate.
On the same note, another pleasant “environmental sound” is the chirping of birds. Is birdsong attractive because humans evolved an attraction to (manmade) music? Or was it that humans evolved to be attracted to birdsong and thence appreciated music in general?
As attractive sounds, the murmur of trees, the flow of water and the song of birds might motivate humans to settle at specific localities, the former two being references to specific features and the latter to a specific climate. Assuming certain “geographical features” were more conducive to prehistoric human survival, it stands to reason that whatever mechanism might help steer humans in the right direction would evolve. Vegetation provides shelter from the elements and food (in the form of both plants and herbivores), while water provides drink and is a proxy of food (other animals drinking from the same source, and plants nourished by the water) [I’m also a supporter of the “aquatic ape theory”, and insofar as water was advantageous to human evolution, an attraction to the sound of water can be expected]. I couldn’t find any academic paper asserting as much, but I think it’s safe to say that humans generally find the sound of the human voice pleasant, which would be beneficial ––– if nothing else ––– in keeping humans in groups.
What about music? Music consists of a highly artificial arrangement of pitches of various durations set to a meter, and seems to be more engaging on average than simply the pleasant sounds of nature. I’d say that the natural sounds have a soothing, calming effect ––– indeed many nature recordings on youtube are labelled “relaxing” and meant to “help sleep” ––– while music can be highly stimulating, whether energizing or provoking emotions of sadness.
Darwin thought that music had to do with sexual selection. That seems reasonable, and congruent with the apparent sexual appeal of musicians, which is beyond that attached to other skilled individuals. But what is the advantage of musical ability, or what does it signal? If musical ability is advantageous for its power to attract a mate, what is the advantage in being attracted to a music-maker? I’ll suppose that, music being old, for most of the duration in which there was an evolutionary pressure musical ability consisted of (1) singing and (2) music production with tools, perhaps mostly percussion. The latter demands a certain dexterity, which it directly presents, while the combination is mentally taxing and alludes to a capability to learn. Singing might be physically demanding, and thus signals physical fitness.
But to explain music we must, I think, go back a little and look at the evolution of language. I have had a certain toy theory that language evolved from the attention to and interpretation of the breathing of conspecifics. Still today, inhaling and exhaling are expressive: sighs, gasps, yawns, pants and so on. Unlike the evolution of communication that uses arbitrary signifiers –– which is really the coevolution of their expression and of their perception –– here we have only an evolution of the perception, since the expression is an end product of other metabolic processes. Just as there is a reason that the heart rate goes up and not down after a person runs or gets excited, there is a reason why a startle makes people quickly inhale, or why annoyance or the end of patience is manifested as a quick inhalation with a stretched exhalation. I suppose these expressions might be exaggerated, but I suspect they originated as a side effect rather than as media of a communicative agency. Further, I suspect that attention to pitch (or, more precisely, its variation: perception of pitch per se, “perfect pitch”, is a rare attribute) also arises from the interpretation of these telling signs of breathing (and further perhaps from their appearance within a sentence, that is, prosody*), which gives the emotive power to certain successions of pitches. That is, I suspect that a “major key” is happy while a minor one is sad not randomly, but due to the stereotypical unfolding in time of pitches during, say, a gasp, exhilaration or sadness.
{bridge?} Many animals use arbitrary signals for communication. From semantically simple expressions –– warning calls (warning conspecifics of danger), threatening calls, mating calls such as bird songs –– to more sophisticated ones such as apes employ, including “sign language”, or such as cetaceans use, there’s a signal that stands as an arbitrary signifier. The emergence of a symbol is the double emergence of the expression and the perception. As far as expression is spontaneous, it seems an easy accident to come by; a creature, be it an animal or specifically some proto-human, needs only some proclivity to gesture or make a sound (think of it like the urge to grab, which precedes the learning of tool usage). As far as this behaviour does not incur heavy adaptive costs, it may stick around. And once a receiver of this signal emerges, the expresser –– if the learning facility is there –– can learn what the “meaning” of this expression is, and utter/sign it accordingly. If developmental progression at all parallels evolutionary progression, one might think of the babbling of babies, which precedes the learning of the meaning of one’s own expressions [%%%**], as analogous. I don’t suspect that there was a long stretch of evolutionary time where humans went about making meaningless noises. With some mammalian intelligence those expressions can pick up meaning quickly. A dog can learn several different verbal commands; while in this case there’s a teacher, that is, one side with the absolute knowledge such that the other side, the dog, approaches with time the point of understanding, one can imagine that a convergence can occur also when both sides start at a point of ignorance. Indeed, one does not need to imagine this: the emergence of cryptophasia or “twin talk”, an idiosyncratic language developed between twins, is exactly that.
Here the emergence of language occurs among organisms with a well evolved capacity for language, but one can imagine a similar, less sophisticated phenomenon occurring among lingually less sophisticated brains. So the emergence of the expresser is “easy”, but what about the receiver? I think one requirement is attention. Once attention is there, learning can occur. What I think is that one “faculty” whose evolution enabled the evolution of language is attention to (auditory) repetition. Repetition is a strong signifier of intent. An example I like is that of door knocking. It is not random or arbitrary that people tend to knock on doors three times. A single knock might be people moving furniture in the stairwell and banging it accidentally against the door; it might be a misconstrued sound; it might be a complete auditory hallucination. A second knock reduces the probability of the above, and a third knock –– especially if it creates a second, equally long interval ––– drops these probabilities to the floor. It would be a freakish accident if something in the stairwell rapped three times against the door in constant rhythm; the first knock elicits attention, which helps locate and correctly identify the source of the noise; and most people don’t imagine such a series of knocks. In behaviour, therefore, a repetition is a strong signal that the repeated phenomenon is not something that “happened” to the person, but rather an act of agency. That animals take advantage of this can be seen in their repetition of calls (for example chimpanzee calls), as well as in humans: baby babble is often a repetition of a single syllable, as is the phenomenon of palilalia that occurs in people with Tourette’s { This might sound far-fetched, but I think that the fact that people verbally respond to sneezing but not to coughing is the other side of this coin. Sneezes frequently come in pairs or more.
While coughing can consist of several explosive exhalations, these usually vary in sound, so that one ends up with a single “coughing bout” rather than a repetitive sound. Sneezes, on the other hand, are usually standard in sound for each person and sometimes even come out at even intervals; that is, they create a repetition that to human ears (or brains) sounds “intentional”. The verbal acknowledgement, whether by the company saying “god bless you” in English-speaking countries or by the sneezer themselves saying “excuse me” in Japan, functions to absolve the perceived meaning. The semantic content of the response is really secondary to the function of dissolving the hanging tension of accidental meaning}. Assuming that there’s no de novo emergence, as it were, of a new behaviour among people with Tourette’s, I think that what is unleashed in these cases hints at an “instinct of repetition”. {An example where a repetition of a word creates a new meaning} I think most of us would associate “pleasure” first of all with bodily, that is, tangible, sensations. This is because the experience of physical pleasure is immediate and engulfing as opposed to other sensorily pleasing impressions. But this immediacy also shifts our minds to the subjectivity of pleasure and away from how it affects our behaviour. The pleasing is attractive: it does not repel us (which is what the painful does) nor does it leave us indifferent. It steals our attention; it compels us upon itself. This is why the thought of a partner thinking about something else during sex is potentially insulting: essentially it suggests that they are not being pleased, do not experience pleasure, and are therefore merely “performing”. Like with touch, the pleasing sound is the one which draws our attention, which compels us to put our mind on it (both internally with our mental faculties, and externally by keeping the stimulus extant).
The pleasure itself is the subjective experience of being drawn; it is the measure against which we weigh the alternative of not attending to the stimulus. Repetition is necessary in music, and it is what makes it pleasurable. Without it there is chaos, that is, noise; with it, pleasure. With it the sound appears intentional, which draws us to it, which makes it interesting. Too much repetition, and perhaps our mind gets the idea that whatever content, whatever information, was encapsulated in it has been extracted; no more (new) information is contained there, and so it becomes boring. I think the auditory repetition inherent to rhyming and alliteration is what makes them enjoyable the way music is. Wikipedia states, “Rhyme partly seems to be enjoyed simply as a repeating pattern that is pleasant to hear” –– but I think what is pleasant is not the pattern (of consonants and vowels, presumably) that is being repeated, but the repetition itself. And when rhyming occurs in metered verses, the repetition in sounds occurs at even intervals, further eliciting the “perception of intention” the same way the third knock on the door does. My theory, therefore, is that the evolution of attention to auditory repetition is what enabled us to develop verbal communication, first a primitive one and then a structured language. It was advantageous to make sounds that were distinguishable from accidental sounds and other environmental noises, and advantageous to attend to such sounds of others. Once creatures make noises that are qualitatively very different from accidental noises (either of their own bodies or of the environment), listeners can treat “intentional” and “accidental” auditory phenomena separately, and from there the intentional sounds can be dressed with arbitrary meaning. Thence the advantage of cooperation among kin is obvious.
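The door-knocking argument can be made roughly quantitative with a toy Monte Carlo sketch. Everything here is an invented illustration, not a measurement: the window length, the tolerance for calling two intervals “equal”, and the model of accidental bangs as uniformly random events are all arbitrary choices of mine. The point is only that an equally spaced triplet is a rare accident, and rarer than a lone bang by a wide margin.

```python
import random

def equal_interval_triplet(times, tol=0.05):
    """Check whether any three consecutive (in time) events form
    two back-to-back intervals of (nearly) the same length."""
    ts = sorted(times)
    for i in range(len(ts) - 2):
        a, b, c = ts[i], ts[i + 1], ts[i + 2]
        if abs((b - a) - (c - b)) < tol:
            return True
    return False

def estimate(n_events, trials=100_000, window=10.0):
    """Estimate the probability that n random 'accidental' bangs in a
    window-second stretch happen to contain an equally spaced triplet."""
    hits = sum(
        equal_interval_triplet([random.uniform(0, window) for _ in range(n_events)])
        for _ in range(trials)
    )
    return hits / trials

random.seed(0)
print(estimate(3))  # three random bangs: only rarely equally spaced
```

More accidental noises in the window do raise the odds of a spurious “intentional” triplet, which fits the intuition that the signal works best against a quiet background.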
Once characteristics emerge (that is, their identification evolves) that differentiate intentional, and therefore meaningful, sounds from others, these characteristics being attractive, they can be taken and amplified into superstimuli. Indeed, I think that the pleasure of rhyming and alliteration stems from these being exaggerated characteristics of what differentiates intentional noise from accidental. I say “superstimuli” since from the point of view of the evolved listening apparatus there is no reason to heed rhymed speech over non-rhymed speech; both are generated intentionally by conspecifics, and their “form”, their degree of “perceived intentionality” (intentonality?), does not allude to the usefulness of the conveyed meaning.{This is not strictly true. Presumably the capability of producing rhymed speech depends on certain cognitive aptitudes and therefore alludes to both genetic and developmental fitness, suggesting the agent as a deserving mate or collaborator, or both} Nonetheless, humans mostly behave as if it were the case and pay more attention and give more importance to the content of words of a good form. {Aside from some humour in the musical composition per se, the humour intrinsic to the Ylvis song “The Fox (What Does the Fox Say?)”, whose video is one of the most watched ever on YouTube, rests largely on the gap between the obvious investment in form and the obvious inconsequentiality of the lyrics} Indeed, like Pinker I think that music is auditory cheesecake, and as with cheesecake, we might forgo nutritious food if it doesn’t have enough sugar in it. “Monotonous” is one adjective to describe a boring speech, that is, one that is hard to concentrate on. At times, and for some people, it does not matter how much they want to listen to that speech; something in their brain simply does not register the intonation-less voice as a sound worthy of attending to, and the mind drifts to other things.
With a certain chagrin I now retrospectively recognize the correlation between the degree to which speakers ––– former professors of mine, lecturers on YouTube and so on ––– modulated their tone, and the interest I had in what they had to say.{I’m in favour of running a certain big-data experiment: looking at whether there is a correlation between voice modulation/amount of pitch variance and views/likes of YouTube videos that are speech only, at least within certain topics/categories. I think there’s also a phenomenon of people intonating, but “wrong”, as if lacking a natural perception of it yet still trying to fake it. But these are really rare, I’d think} I think that what differentiates them, or, rather, their lectures, is not who or what they were, but rather what they did or thought they were doing (which also affected aspects of their lectures other than intonation, but I think the latter is a significant part). The monotonous speakers come to recite; there’s information they came to deliver whose value is agnostic to the tone in which it is delivered. Therefore it is enough to just say all the right words in the right order, and the task is essentially successfully completed. The tone modulators, on the other hand, don’t “give a talk” but “talk”; for them the lecture is a social interaction that is to be handled the same way one handles any one-on-one conversation. That people (most people?) speak differently when delivering a lecture or presentation had been obvious to me for a while, but I remember it being humorously striking when a friend rehearsed before me a presentation she was to give a few weeks thence. We discussed whether she would “present” before me or simply “tell me the contents”, and as I remember it we settled on the latter. Nonetheless, what she did was “present”.
Since it was in an informal setting, one on one and outside in the park rather than in a lecture hall, the boundary between our “organic interaction” and the “presentation” fell between two adjacent sentences, and so I could immediately hear how her speaking changed. I’d predict that how well-received a person’s talk is by a “general audience” can be predicted by how difficult it is to tell (based, say, on a short sound clip) whether they are “lecturing” or “talking”. Or, even more narrowly, whether they are at the “talk” part or at the “questions and answers” part of their lecture. This might be slightly far-fetched, but I imagine there might be a certain interaction between teachers’ and students’ communicative styles and how well these students do at school. That is, all else being equal, in cases where you have a “reciting” teacher, students who are less sensitive to intonation (in other words, whose attention is less attenuated by changes of tone) would do relatively better than ones who are more sensitive, simply since the latter would find it harder to concentrate, being used to verbal communication delivered with a certain relish, let’s say. If you were paying attention, perhaps you noticed that I switched from talking of repetition as attractive/attention-grabbing to talking of intonation as such. I’m guilty here of a sleight of hand. Both repetition and modulation of tone are components that are important to music, and skillful crafting of both is necessary to make a piece of music pleasant. I argued for the evolved attention to repetition as the evolution of attention to agent-driven sound generation, and while I first thought that I would not do the same with tone, upon further thought I imagine that something similar might have happened with tone as well. In one paper Justus and Hutsler {Fundamental Issues in the Evolutionary Psychology of Music, T. Justus and J.J.
Hutsler, 2005} enumerate the mental faculties shared by music perception and speech processing to counter others’ claims of definite evidence of natural selection operating in the realm of music perception. It goes well with my idea of music being a superstimulus of auditory attention, but I want to evoke something else from the paper. They mention that harmonics –– which are important in music ––– are a property of most auditory phenomena in the world. When something in nature makes a sound, it is never a perfect sinusoidal pitch of a single frequency, but always a complex stack of frequencies, many of them standing in certain frequency ratios to each other (due to the nature of how things vibrate, à la “standing waves”). There may be many different frequencies there, but the human ear “correctly” recognizes them as being emitted from a single source. Whether this is intrinsic or learned, just as the brain successfully organizes the visual input into distinct objects based on colour, boundaries and so on, so does it organize the auditory input by regularities of frequency and timing. It’s difficult to imagine such a sound in a pre-industrial world, but perhaps a constantly repeating sound is more indicative of a cyclical process than of the intention of an agent, such that keeping attention on it would be disadvantageous; indeed, if it’s continual, one might completely lose attention to it for a long period of time –– the sound of rain, for example. Alternatively, rather than being a property of audition, perhaps it’s an extension of a general property of the brain: it deals with long and unchanging stimuli, which seem to matter very little, by ceasing to pay attention to them.
Just as we stop heeding the feeling of the clothing on us, or seeing the frame and surroundings when watching a movie, or seeing our own nose, so we become impervious to the sound of a ticking clock in a room, or the traffic outside, the ventilation and so on.{This is why, of course, people put on “white noise” or a recording of rain or a flowing river and so on when they want to concentrate, or relax. The auditory input is taken over completely by a regular sound that is soon ignored by the brain and which is loud enough to mask any otherwise distracting sounds, limiting attention to the other senses or, in the case of relaxation, to inner thoughts} While sounds of little semblance or temporal organization go unheeded, being perceived as noise, a sound too constant is regarded as emanating from a single source, but due to its unchanging nature the information it conveys is quickly registered and thence it is “filtered out” as part of the background noise, returning to attention only once it ceases or changes in quality. On the other hand, sounds that are similar enough to each other –– for example by sharing a timbre ––– to elicit the perception that they belong to a single source, and organized temporally so as to suggest they belong to an agent, yet different enough to indicate a changing, dynamic agent, would continually recapture our attention. I suggest that the evolution of language took advantage of that faculty by driving humans not only to emit sounds that repeat, but also ones that vary in tone, that is, pitch –– and, more specifically, to vary pitch according to ratios that reproduce the harmonics that the brain recognizes as a single entity. This would be akin to breaking down a harmony, a chord, into melody, a temporal succession of sounds.
It is reasonable to believe that the same notes that, sounded together, make a harmonious chord would be recognized by the brain as belonging to a single source when stretched in time as a melody –– not only accidentally, but also “on purpose”, as different frequencies of a natural sound decay at different speeds, so that a “single sound” sounds different over time. An adaptive way, therefore, to grab the attention of conspecifics via sound would be to produce sounds that repeat, that resemble each other, but which vary over time in ways that mimic the auditory profile of dynamic sounds from single sources. People naturally do this in speech; rhyming and alliteration are a superstimulus to this cognitive apparatus evolved to enable human verbal communication, while music is a further step in the super direction, not being constrained by the limitations of speech, or even of human organic sound production once musical instruments were introduced (and percussion instruments were probably introduced very early on, via the process presented in the tactile section).
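The harmonics point can be illustrated numerically. The overtones of a single vibrating source sit at integer multiples of its fundamental, and the frequency ratios of a just-intonation major triad (4:5:6) already occur among those overtones ––– one way to see why such a triad, or the same ratios unfolded in time as melody, can pass for a single natural source. A minimal sketch (the specific frequencies are arbitrary choices of mine):

```python
# Overtones of a vibrating string or air column sit at integer multiples
# of the fundamental; a just-intonation major triad uses ratios 4:5:6,
# which appear directly among those overtones.

def harmonics(fundamental_hz, n=8):
    """First n partials (including the fundamental) of a single source."""
    return [fundamental_hz * k for k in range(1, n + 1)]

def just_major_triad(root_hz):
    """Root, major third and fifth in just intonation (ratios 4:5:6)."""
    return [root_hz, root_hz * 5 / 4, root_hz * 3 / 2]

series = harmonics(110.0)          # partials of a low A (A2)
triad = just_major_triad(440.0)    # A major triad built on A4

print(series)  # [110.0, 220.0, 330.0, 440.0, 550.0, 660.0, 770.0, 880.0]
print(triad)   # [440.0, 550.0, 660.0]
# Every triad note coincides with a partial of the low A:
print(all(f in series for f in triad))  # True
```

Equal-tempered instruments only approximate these ratios, so the sketch uses just intonation, where the coincidence is exact.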
It’s hard to believe, however, that music is merely a superstimulus of evolved adherence to speech. Either there was something else evolving on top of speech perception or, which seems to me also very likely, the evolution of adherence to speech has had more to it than attention, or the state of “attention” involves more than directed information processing. Speech has syntactic content which makes its effect very rich and
-
This is the ideal case. In actuality there’s also a give and take between the artist’s vision and her skill and other resources available for creating the art, so that often either the vision is not faithfully realized, or it’s partially dropped for a more manageable vision, or both. ↩
-
I hope this kind of clarification is not necessary, but I shall make it since my impression is that texts about aesthetics tend to deal only with the visual arts and with music, and I want to stress that I am neither modifying nor broadening the established concept of “aesthetics”. Ultimately I want to uncover what lies under art, while allowing for the fact that an aesthetic experience can be derived from phenomena that are not manmade.
If this doesn’t seem enough of an argument for considering taste in such an essay, one can think of the culinary arts. As an art there is some standard by which to judge it, namely, its aesthetics. Besides the gustatory sense, in the culinary arts the visual and olfactory senses have an important role, but the gustatory has the main role, one that is also unique to the form. Its degree of importance and its context notwithstanding, what is important here is that the gustatory sense has an aesthetic effect and an aesthetic aspect.
[^TasteEvolution]: Of course this evolution occurred “prior to mammals”, but I won’t concern myself throughout this essay with the timing of adaptations, and will always refer to them as human if they are still extant in homo sapiens.
A terminological note: I will make the same distinction as exists between “evolutionary psychology” and “developmental psychology”; I would use “evolved” to refer to changes that occurred during the “life time” of a species, and “developed” to changes that occur during the life time of an individual, keeping in mind that these developmental changes themselves are evolved. ↩ -
I think it’s fair to make a distinction –– which I’m making on pure guesswork ––– between the “general” aversion of “Westerners” to eating insects, worms and other invertebrates, which I’d attribute to cultural factors, and a more “natural” aversion to rotten or otherwise decomposed food. As far as I see it, the aversion to eating casu marzu is a combination of the two –– an aversion that is somewhat justified, as these maggots can, when unchewed, survive within the intestines and cause some damage before they die or get expelled. The aversion to eating a casu marzu that was cleared of maggots is more akin to the aversion to eating moldy cheese ––– a natural aversion to decomposed food? ––– than to the aversion to eating crickets, a more “cultural” aversion. ↩
-
One might wonder if all the foods we have “learned” we don’t like already in childhood (a lesson we didn’t “unlearn” by avoiding these foods thence) were all foods we had eaten prior to an illness, even if these occurrences had happened outside the scopes of our episodic memory. ↩
-
How Plastic Are Values?, Robin Hanson. ↩
-
Screening and Comparison of Antibacterial Activity of Indian Spices, Madhumita Rakshit and C. Ramalingam, 2010.
Antibacterial activity of different essential oils obtained from spices widely used in Mediterranean diet, Manuel Viuda-Martos, Yolanda Ruiz-Navajas, Juana Fernández-López and José Angel Pérez-Álvarez, 2008. ↩ -
Does Food Color Influence Taste and Flavor Perception in Humans?, Charles Spence, Carmel A. Levitan, Maya U. Shankar and Massimiliano Zampini, 2010. ↩
-
For example, a recent case or phenomenon in North Korea, and Solzhenitsyn’s account in The Gulag Archipelago, both involve the consumption of children by their parents under conditions of food scarcity. Accounts of people eating non-relatives, sometimes the corpses thereof, under extreme circumstances are a little more common.
https://books.google.de/books?id=P8HZ7Mn9fzMC&pg=PA574#v=onepage&q&f=false ↩ -
One result of ethology is the discovery of “fixed action patterns”, and the fact that certain behaviours are elicited more easily and frequently than others. So, for example, the impulse of cats to chase down sneakily-moving things is stronger than the impulse to bite them, which is further stronger than the impulse to deliver a deadly bite and eat. Eating is motivated by hunger, while the chasing is not. This is perhaps partially what’s behind the “ability” of cats as well as dogs to play with each other “aggressively” but not dangerously so. If a cat (or a dog) tries to eat things then they might be hungry, but a cat who chases things is not necessarily hungry. ↩
-
While English does explicitly denote reflexivity of verbs in cases where a transitive verb that is seldom reflexive is directed at the actor, such as in “he slapped himself”, it mostly employs transitive forms where some other languages employ reflexive verbs. For example, in German, Russian, Hebrew and Spanish “to dress” (getting dressed) and “to meet” are reflexive, as is “to feel” in this particular sense (with the exception of Hebrew, where the transitive verb is used; the reflexive form means “to get excited”). ↩
-
Supposedly as part of a “mixed mating strategy”, where one male mate is chosen for his offspring-rearing support, but another male, ostensibly of higher genetic quality but not necessarily much of a child-investor, is chosen for his genetic material. ↩
-
Menstrual cycle variation in women’s preferences for the scent of symmetrical men, Steven W. Gangestad and Randy Thornhill, 1998.
Women’s preference for dominant male odour: effects of menstrual cycle and relationship status, Jan Havlicek, S. Craig Roberts, and Jaroslav Flegr, 2005. ↩ -
Female body odour is a potential cue to ovulation, D. Singh and P. M. Bronstad, 2001.
Ovulatory cycle effects on tip earnings by lap dancers: economic evidence for human estrus?, Geoffrey Miller, Joshua M. Tybur, Brent D. Jordan, 2007. ↩ -
To drive the point further, I’ll mention floor sitting. At first glance, it seems like bare floors should not be any less comfortable than, say, carpeted ones, since they are flat and therefore spread the pressure of sitting evenly. However, while the floor is flat, our bottoms aren’t. The carpet increases the area of the body that touches the floor to support the same weight, and therefore decreases the pressure on the body. ↩
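The argument here is just pressure = force / area; a toy calculation (all the numbers are invented for illustration, not measured contact areas):

```python
# Same body weight over a larger contact area gives lower pressure (P = F / A).
# The weights and areas below are made-up, purely illustrative values.

def pressure_pa(weight_n, contact_area_m2):
    """Pressure in pascals from a weight (newtons) over a contact area (m^2)."""
    return weight_n / contact_area_m2

weight = 700.0           # roughly a 70 kg person, in newtons
hard_floor_area = 0.015  # small patches of contact on a bare floor (m^2)
carpet_area = 0.045      # the carpet conforms, tripling the contact area

print(pressure_pa(weight, hard_floor_area))  # about 46667 Pa
print(pressure_pa(weight, carpet_area))      # about 15556 Pa
```

Tripling the contact area cuts the pressure to a third, which is the whole of the footnote’s point in one line of arithmetic.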
-
Generally speaking this could also be said of the joy of food and mating, but that would require very artificial hypothetical scenarios, for example an environment that has plenty of non-nourishing food-like things, such that a person eating these joyfully would sooner or later die. But while plenty of properties of the environment vary across biomes as well as manmade habitats, food is food and is correctly recognized by humans and enjoyed when consumed. It’s a stable property of the environment. That is not to say that all environments have something to offer humans to eat, but in those no reward system could replace this scarcity and let humans survive, other than, perhaps, one that pushes humans to move out and away, potentially to other environments where food might be available. ↩
-
I suppose these might be considered so-called “displacement activities”. ↩
-
I.e., toys through which a child does not narrate a story (as opposed to, say, dolls). The assumption being that “narration plays” have a central function of their own, namely understanding the social world and the relationships between people.
At the same time, it is hard to say what’s going on inside a toddler’s mind as she stacks cubes on top of each other. Could she be assigning meaning to the cubes and their relations?
In many ways football is very different from cube stacking. First because it’s a social activity, happening in the world and between people as opposed to happening in a single person’s mind during an interaction with the world, and second because a “game”, as an activity with solid rules, is quite different from “playing”, where rules are made on the fly and continuously evolve. Nonetheless, one can imagine that the proto-versions of different foot-ball games evolved from some people first fooling around with a ball, gradually setting rules that allowed them to cooperate in play and compete with each other. The modern football game is in a sense still a play with geometrical toys and with an assigned meaning to the different geometrical relations within the game: when the ball crosses a certain line one team has won a score &c. But, again, since it’s a “game” and not freeform “playing”, it doesn’t tell of relations between entities in the world like playing with cubes might, at least not to the same extent. The only reflection on the world a football match casts is at the conclusion of the game, when one could say “Team A is better than team B in football”, or, by extension, “Team A is better than team B”. An exception to this are games to which a certain relation with the world (and players) is ascribed a priori, an ascription that is like a “rule” of the game, only it doesn’t govern how one is to play it, but how the outcome of the game is understood to influence –– or be influenced by ––– the world beyond the game. In a way, then, the knowledge conferred is more instrumental than exploratory, that is, more like looking at a watch or a compass than perusing a book or studying a leaf. For example, it is speculated that Mesoamerican ballgames were used to mediate conflicts –– an instrumental use par excellence ––– as well as, in some periods, to choose the person to be offered in human sacrifice to the gods.
It seems that the answer to the question of whether the practitioners believed that somehow the game, as an “act” of divination, “picked” the sacrifice, or that the game merely served as a simple competition, would indicate whether the game was revelatory (the former case) or merely instrumental (the latter case). Some ascribed certain moves in the ancient board game, the “Royal Game of Ur”, to be prophetic, telling of a player’s fortune. But nowadays, too, if a player is losing badly he may feel that “the world is against him”, that is, that somehow the outcome of the game reflects on reality. And, of course, if a person is winning she might feel very elevated. These are because people take games seriously; indeed, if they didn’t, it wouldn’t be much fun. This seriousness and game-outcome-induced emotion are not restricted to the players but can be shared by spectators, as can be witnessed in the behaviour of contemporary fervent football fans, or the “Blue” and “Green” chariot-racing factions of the Byzantine Empire. ↩ -
There’s an argument to be made that technological advance did not promote the happiness of human beings. However, it certainly facilitated their propagation as a species. ↩
-
Among many animals, by the way. Even in octopi!
Observational Learning in Octopus vulgaris, Graziano Fiorito and Pietro Scotto, 1992. The PDF also contains a curious cross-journal imitative tool use, namely that of Science imitating The New Yorker. ↩ -
A little less exemplary because while Archimedes acted upon the world (got into a bath) and realized something, Newton simply observed the world around him –– by watching an apple fall off a tree (though not on his head, as the popular story goes). In the end I’m not sure it really makes a great difference vis-à-vis the point I want to make later on in this text. Perhaps it’s premature to provide this explanation at this point of the essay, but nonetheless:
The point I shall make is that the insight that objects in the environment can be taken in hand and used instrumentally for the accomplishment of tasks that are hard or impossible without them did not come about from a generalizing abstraction of the material world and the properties of matter together with the knowledge of one’s hands’ ability to manipulate, but rather from direct experience of the capabilities of different objects that were taken in the hand and handled, an experience that was brought about not because anyone thought it was a “good idea” to take a twig and stick it somewhere or take two stones and smash them together, but because there was an impulse to do so.
In a sense I’m generalizing observational learning. On the one hand, when we see others use objects instrumentally, we learn about the objects’ capabilities and might imitate their use, including applying it –– this existing use –– in new ways. On the other hand, sometimes we observe ourselves doing something with a rock in hand, and we learn usage from our own actions, initiated with no particular goal in mind.
While Newton worked and thought during the Enlightenment, when Science as a systematic investigation of the principles of the natural world had begun to emerge, nonetheless the mental stimulation and thinking brought about in him by the vision of the falling apple is akin to seeing another monkey smashing a watermelon against a boulder and realizing something. ↩ -
But then again, one must admit that it is impossible to have a story where a character does not interact with its surroundings. In other words, even if all inventions were merely the fruits of a detached mental activity, we would still only have stories of baths and apples since the “detached invention” does not yield a story.
And that which is true of generalizations of mechanical forces is even more so the case, I think, for simple tool use. Even primitive tools ––– their usage and production ––– are a culmination of many small accidental discoveries. I’m not denying that abstraction and generalization take part in this process, but the degree of people’s capacity for abstraction is variable, and even at the highest degree it is limited, both by power of imagination and by experience. Humans learn causality and the properties of objects and materials in the world through observation, and I think that “reasoning” is applying previously learned causality and properties to new situations ––– this is key, since even the insight required for simple tools rests upon phenomena that seldom occur “naturally”. ↩ -
This is perhaps a silly example, but possibly instructive nonetheless. Consider video games in which the player controls a single human character within an environment that is more or less imitative of “the real world”, such as Red Dead Redemption 2, GTA V, Minecraft or The Sims. Our knowledge of the real world allows us to form expectations of the game, which help and guide us in playing it ––– as opposed to games like blackjack and chess, and to a greater extent than games like Starcraft and SimCity. At the same time, we are aware that the game is a very limited simulation of reality, so that both what we can do as a human character and how the game world reacts to our actions will deviate from what humans can do and from the way the “world” would typically respond to certain actions. With no manual at hand nor the experience of seeing others play the game before us, the totality of our expectations of the game would be made from our experience in the real world, our experience within the game world, and, no less importantly, our familiarity with other games in the genre. I still remember a moment of exaltation while playing the game Deus Ex (2000), when a certain alcove revealed itself hidden behind a bookcase, or something like that. To me this kind of interaction with furniture was beyond anything I had ever experienced in, and therefore expected from, a first-person shooter video game. Imagine that after playing a video game for a long while you suddenly discover that as your character you can do something, perhaps even something trivial, that a real human being obviously can do but which you had not expected the game designers to implement, nor had you ever been able to do such a thing in other similar video games. Or that the game world reacts to your character’s action in a way that is verisimilar but breaks your expectation of the simplicity of the game.
In some sense, I’d say, the real world is like this manual-less video game, except that we as humans have no external or parallel reference for our existence; therefore, when the ancient human being repeatedly smashed one stone against another and got a sharp stone with which she could sharpen a log, she discovered something truly wonderful. ↩
-
I think it’s wrong to think of “obsessive-compulsive disorder” as a single malady. Rather, I think it’s a collection of behaviours executed by a “maladaptive” brain, grouped together only through a perceived similarity by outside observers. “Fine tuning” the brain –– evolutionarily speaking –– is a very subtle task. I think that people with OCD “simply” have one specific natural drive over-amplified, overshooting what is useful in their actual circumstances.
Another option is that the “drive amplitude” is not an intricate matter, but rather that in humans some sort of “higher” brain function/module has developed which down-regulates/inhibits/controls these drives, and it is this component which “malfunctions” in people with OCD.
Admittedly, I have not read much literature about OCD, and so my thoughts about it are based solely on guesswork. ↩ -
Presumably this kind of bootstrapped learning can be extended to many behaviours. With some needs a motor instinct is directly related to satisfying the need, as with urination and the need to vacate a full bladder. With many other needs, however, the signalling of a need can be decoupled from the motor instinct that might be used to alleviate it. One can presume that in a human baby the system that “generates hunger”, that is, creates its painful signal, is separate from the baby’s instinct to put things in its mouth. Indeed, since both originate in the brain, at the resolution we are inspecting these, they must be “separate”. However, at some point the individual puts two and two together and learns that eating alleviates hunger. Perhaps more than helping the baby learn the benefit of eating by itself, the instinct to put everything in one’s mouth assists its parents in feeding it. It might be regarded as mostly disruptive, since parents need to constantly monitor the baby’s behaviour and prevent it from swallowing lego bricks and other non-food items, but only because the fact that humans eat food seems so natural. How much more difficult would it have been to feed a child if it did not instinctively try to swallow what was presented to its face? ↩
-
A Minimal Visual Restriction Experiment: Preventing Chicks from Seeing their Feet Affects Later Responses to Mealworms, Josh Wallman, 1979. ↩
-
Using the word “objects” would have already implied a certain severability of some things from the environment and therefore their potential to be used independently of it (a stone that is part of the landscape, a branch that is part of a tree). This understanding alone would have been a great insight if unassisted by a motive to take things in one’s hands. ↩
-
This is of course not completely true. Genes can also be propagated by taking care of kin ––– that is, kin that reproduces. Any genes that prefer the wellbeing of kin over one’s own reproduction would not, on average, be propagated down the generations: only kin that reproduces would propagate shared genes, and “by definition” this kin doesn’t share these “not assuming one’s own future children/preference for kin’s children” genes. Therefore, any genes not assuming future pregnancy are not adaptive and would go extinct.
This is a somewhat simplistic presentation, but, I believe, generally true. ↩ -
That is, instead of requiring exertion as a prompt for muscle growth, as with many (all?) skeletal muscles, the body could grow and maintain these muscles prompted by pregnancy, in the same way that, for example, it generates a special hunger prompted by pregnancy, such as a hunger for chalk (lots of calcium). ↩
-
It’s also possible that spontaneous pelvic-muscle growth would have evolved in parallel to the still extant clitoris, which would have allowed, as a next step, the disappearance of the clitoris. Either there are reasons why this alternative mechanism is not adaptive, or this coincidence, this evolution, has simply not yet happened.
I’d also imagine that since the genitalia are rather essential they are probably “well situated in the genome” such that it’s “not easy” to remove the clitoris without removing the penis at the same time. ↩ -
The Exultant Ark: A Pictorial Tour of Animal Pleasure, Jonathan Peter Balcombe, 2011, pp. 87-88. ↩
-
There have been several studies investigating the composition of this “squirt”. Before the elicitation of an orgasm commenced, the women would urinate to evacuate their bladders. It was found that during the stimulation the bladder would fill up, and an analysis of the fluids expelled during the orgasm found them to be “diluted urine” from the bladder, mixed (sometimes) with a secretion from the female prostate, similar in composition to that of the male prostate during ejaculation. Some therefore interpret this phenomenon as “urinary incontinence”, as if it were a “defect” of the system rather than a feature. Urology is not a field I ever had much interest in, so my knowledge there is very limited, but to me this conclusion seems false due to two properties of this “squirt”: speed and composition. Since I have no hypothesis regarding what the role of squirting might be ––– other than that it might have to do with child delivery ––– I didn’t care to dive into the numbers and look carefully at the rate at which the bladder usually fills up, or at the volume of the squirt divided by the time between the pre-orgasm urination and the squirting. However, it seems that even without any calculation it is clear that this stimulation-triggered bladder-filling is abnormally fast compared to the filling associated with the bladder’s normal function of urine disposal. Urinary incontinence has to do with poor control of the bladder as it fills up during this kind of normal function. It fills up because the body has excess water and dissolved waste material it needs to get rid of. The bladder filling up rapidly in response to some external input is clearly something completely different.
Second, the labelling of the squirt as “dilute urine” by itself suggests that it is “not urine”, and on top of that its differentiated composition suggests that it is, indeed, not urine. If a person drinks a lot of water prior to urinating, it’s reasonable to expect the next urination to be rather dilute. While it is possible that some of the women in the study drank a lot of water before the experiment, it is unlikely that all of them did so. Certainly they were not instructed to do so (it was not mentioned in the study) ––– they did not come to give a urine sample; they evacuated the bladder before the experiment in order to enable a clearer separation between whatever it is that a “squirt” is, and urine. If we suppose that some, or perhaps most, of the women did not drink a particularly large amount of water before the experiment, and presumably not during it either, then we cannot expect their proper urine to be more dilute the second time they pee (which is what these investigations suggest they are really doing when they squirt). Urine concentration is, of course, inversely correlated with excess water in the body. If that excess water did not exist before/when they urinated prior to the sexual stimulation, where did it suddenly come from? From this perspective, therefore, it is clear that squirting is not urination (voluntary or otherwise) ––– that is, it does not function (or malfunction) to expel excess water and waste ––– any more than crying or salivating does. In addition, the fact that its composition is not merely that of “diluted urine” but also includes prostate secretions –– which, as I understand it, are secreted directly into the bladder, so that it is not mere “diluted urine” washing down a coincidental stimulation-triggered secretion –– suggests, again, that it is “not urine”.
Just to be clear on this point, when I say that it is not urine I mean that it is distinct functionally and, therefore, as an evolutionary adaptation, in the same way that rubbing alcohol and vodka are “not the same” even though one is a diluted version of the other. ↩ -
It doesn’t seem like orgasms are necessary to motivate copulation; many women do not ever experience orgasms during sex, far more do not experience them consistently, and yet they still want sex. On the other hand, some (many?) asexual people masturbate and are capable of reaching and enjoying an orgasm, are romantically attracted to other people, but have no desire to have sex with other people.
Motivation for sex is intrinsic. People become horny and so they seek sex. Having sex is pleasurable, rewarding. To reiterate the beginning of this essay: how do we know when something is pleasurable? When we want it to continue. “Don’t stop” is a phrase uttered pre-climactically, not during an orgasm. Orgasms, to the contrary, are great demotivators of sex-seeking. They might have a pleasant, soothing and relaxing effect, but they also mark the end of the pleasure from sex –– especially in men –– and reduce the need for more of it. This kind of dynamic is not restricted to sexual activity. When we are hungry what we want is to be satiated; however, what is pleasurable is eating. Once we are satiated eating does not further please us, and if someone or something makes us eat more, it would be done with displeasure. It might be pleasing and relaxing to be satiated –– certainly more than being hungry –– but all in all it is a state of neutrality towards food. Eating becomes something we stop being concerned about until we are hungry again.
Is this merely playing with words? Consider a scenario where I am offered some sort of job, for which I’d be remunerated X amount of icecream per amount of work time. Now, let’s say that I crave the icecream but also have a general idea of how much I’d like to eat, and, it being a very hot day and I having no cold container to store icecream, any surplus would go to waste or, worse, make a mess which I’ll need to clean. Wanting the icecream, I have motivation to work, to do a job that is inherently unpleasant. Once I get enough icecream I lose the motivation to work any further, and I stop. So is the icecream a motivator or a demotivator? It cannot be both.
There are two ways to look at it, in the context of our discussion, as far as I see.
- The definitions on wiktionary of “motivation” right now include “Willingness of action especially in behavior”, “Something which motivates”, and “An incentive or reason for doing something”.
This might sound a little smart-assy, but what motivated me to work is not the icecream but the promise of having it. More exactly, what motivated me was the icecream-offerer and not the icecream, which might not even have existed. If the offerer refused to give me the icecream at the end I’d have been angry and felt cheated, but the work would already have been done.
If instead of being offered icecream I decided to “cook” icecream from scratch, my motivation would have been the expectation of having icecream at the end of the process ––– which, since I’d be doing it for the first time and without proper instructions, might well not happen, leaving me frustrated.
So, again, is icecream the motivator or the demotivator? Of course, when we answer that the motivation is “icecream” it is shorthand for “the promise of having it”, but the fact that it is a “promise” or “expectation” is a crucial component. This comes out not only in theory; there are people who speak of how welfare benefits demotivate unemployed people from seeking a job. That would be akin to icecream being freely available, which would certainly make me less likely to do a nasty job for it, or to cook it if I derive no pleasure from the process. To paraphrase: the combination of (1) wanting the icecream and (2) a conditional promise of it motivates me to work (fulfill the condition).
- A key difference between an orgasm and the icecream is that while in my example the work (or cooking) was taken up despite being unpleasant (for the sake of the promised icecream), in the case of sex the “work” –– coitus itself –– is inherently pleasurable. Being self-aware and so on, humans might be confused about this, but we are not having sex in order to have orgasms; we have it because it itself feels good (or due to ulterior motives, such as wanting to achieve pregnancy), and it doesn’t feel good because there’s an orgasm “at the end”. This seems obvious, or so it does to me, when I consider how in the case of the icecream I would have wished the work to finish as fast as possible so I could get the icecream and stop working, while in sex the opposite feeling reigns: one wishes the orgasm wouldn’t come and that it would keep going on “forever” (though that pleasure, of course, has its limits). I’d say that a lot of human activities in bed, indeed maybe all of erotica, are meant to keep one or one’s partner from reaching a climax, all the while the involved dally within the sexually aroused state. The pleasure is gained within this tension, where one is clearly heading towards a climax but has not quite arrived yet.
It’s therefore no surprise that both erectile dysfunction and premature ejaculation, in a way opposite phenomena, are considered problems. Nonetheless, sex as an activity happens in the context of other needs, and therefore the inability or difficulty to achieve an orgasm is a problem too, rendering sex excessively prolonged to the point of making the practitioners tired, bored or physically irritated.
My motivation to work came from its being the condition for me to receive something that I wanted, the icecream, which I expected to derive pleasure from. In the other case, the pleasure from sex was conditional on there not (yet) being an orgasm; the pleasure was a given and provided an intrinsic motivation to engage in the activity. In both cases what comes at the end terminates the motivation for the activity preceding it, but in different ways. The icecream achieved it because it was that which was sought by the activity, while the orgasm did it by “turning off” the pleasure derived from the activity (at least in men). Perhaps this is what makes one a motivator and the other a demotivator? One makes us do what we otherwise wouldn’t want to (motivator), while the other makes us stop doing what we otherwise would want to (demotivator).
Now, it’s worth saying that while I’m trying to put some conceptual order through language, things might be less simple than presented. One reason why the previous example was not such an adequate analogy –– and you might have felt it –– is that while the icecream relates to an intrinsic property of mine –– my delight at icecream –– its reception was external, whereas the orgasm and its effect on me are wholly intrinsic, even if prompted by something external. The condition for the reception of the orgasm at the end of the sexual activity is internal –– the way the biology of human beings is set up –– while the condition for the reception of the icecream at the end of work is external ––– the promise of another human being, or my success at cooking icecream.
It surely does seem at times that people are motivated to orgasm. I mentioned that it might be humans’ self-awareness that makes them feel that this is the case, but, what can we do, humans are self-aware and I don’t want to simply hand-wave this away. Let’s assume that people do engage in sex, motivated by the promise of having an orgasm, but also that what I have written above is true. Does it follow, then, that people are motivated to achieve something that would kill a source of pleasure? Well, yes and no.
First, the no. As said two paragraphs up, the orgasm is conditional on the activity preceding it, so this hedonic-termination is conditioned on first experiencing pleasure, it’s a consequence of it. Potentially one could have sex and stop before achieving an orgasm (as I think some traditions call for), but probably then the desire (discussed further next) is even greater than before the initiation of sex, so nothing really is gained by stopping other than some temporary pleasantness, and potentially the practice of restraint.
Second, the yes. Sources of pleasure can and do become sources of pain when we hold them in our mind while they are not immediately available. Their hold on our mind comes either internally, for example the hunger for food or sexual desire, or when we become aware of something good that we could obtain were we or somebody else to do (or avoid doing) something, but which still remains in potentiality (and the pain continues if the potentiality ends in frustration, i.e. the boon was not achieved). The wish to orgasm, then, becomes the wish to be freed from the pain of yearning.
With some things, pain and pleasure are clearly separate, and one could exist without the other. For example, being beaten up is clearly painful, but not being beaten up is not especially pleasant; it’s merely neutral. I suppose one would not get elated at not being beaten up unless one had been beaten up every day and suddenly, one day, one wasn’t. But even then the pleasure would come from mirth derived from relief that something bad that was supposed to happen didn’t happen. If one was greatly distracted on that day by something else, this absence would go unnoted without any additional positive effects. On the other hand, a good massage is clearly pleasant, but most of us would not suffer for not receiving one (unless we suffer from back pain and feel that a massage would be the only succor).
But what about food and hunger, or sex and sexual desire? Do we enjoy food because it diminishes the pain of hunger? Or perhaps the pain comes from hunger while the pleasure comes from an exquisite taste? One can imagine eating something with resignation, no pleasure derived, nothing else being available. On the other hand, people do eat things that are tasty despite not being hungry, and that is what turns people obese. But then again, the satisfaction of hunger can be positively pleasurable; they say that hunger is the best sauce, and indeed after a day and a half of not eating, the plainest food would be relished, just like plain cool water after long or intensive physical exercise. Perhaps in this context one can make a meaningful distinction between that which is pleasant, quenching and satisfying. If we ordered sushi but the delivery-person got lost on the way, so we never get it and eat bread instead, our meal might be pleasant, if the bread was more or less tasty, but not satisfactory, since our hopes for how to satiate our hunger were frustrated. If the bread was enough then the meal would be quenching, while if we stayed hungry afterwards it wouldn’t. I wonder if anything could be satisfying while not being pleasant ––– we got what we wanted but didn’t get the pleasure we expected from it? I suppose we would still be frustrated (not satisfied), albeit now due to our misjudgment rather than to unfavourable external circumstances. ↩
Perhaps the situation is analogous with sex drive and sexual activity, perhaps not quite. Either way I think ––– I hope ––– that I have established that that which motivates people to engage in sex is not the orgasm.
-
No direct relationship between human female orgasm rate and number of offspring, Brendan P. Zietsch and Pekka Santtila, 2013.
As they themselves note in the discussion section, modern family planning reduces any correlation between orgasm frequency and number of children, had there been one. Nonetheless, given that fertility problems are not uncommon, one would expect “border-cases” where extra fertility arising from female orgasms would have made the difference between successful and unsuccessful conception, but nothing of that sort shows. In addition, unplanned pregnancies are also quite common, and here one would really expect orgasms to make a difference if they had really evolved as a method of female selection. According to a survey of English-speaking Australians done in 2015, two years after Zietsch and Santtila’s research was published, 26% of the last pregnancies were unintended. Of the unwanted pregnancies, 8% resulted in births, making unwanted births 0.5% of all pregnancies; the rate of live births among wanted pregnancies –– 15% of which were aborted –– was not given, but presumably it was lower than the remaining 85%, making unwanted births a higher proportion of all live births. However, 72% of wanted but unintended (“untimely”) pregnancies ended in a live birth. Given that more-frequently orgasming women did not have children earlier, as one would expect given the phenomenon of untimely births if orgasms were assisting fertility, it seems that overall, indeed, female orgasms do not function to assist conception. ↩ -
This is slightly tangential, but I was always struck by the seeming equanimity with which other animals give birth. I’m a believer in the Aquatic Ape theory (which probably deserves a post of its own), and the greater calmness of water births seems to support it as well. From Christianity we know that it is easier for a mother to give birth than for a rich man to reach heaven, but the Bible’s promise that woman would suffer giving birth seems to me a result of the Israelites having done it all wrong. Other civilizations, anyhow, practiced water birth thousands of years ago. ↩
-
It would have “sufficed” if this stimulation only appeared when women lactate. However, as with “tool use”, evolutionary adaptation works through “discovery” rather than “invention” (no intelligent design here, either). As long as the erogenousity of the nipples serves to propagate the responsible genes, and as long as their erogenousity during non-lactating periods is not detrimental to the fitness of these individuals, it could “survive” occasions when it is not necessarily “increasing fitness”.
The same can be said about the erogenousity of the male nipples, or, indeed, about the very existence of nipples in human (or any mammalian) males. ↩ -
Or at least this is what I managed to gather from Tristan Taormino. She mentioned that she greatly prefers anal over vaginal sex, which is puzzling if the pleasure stems from the stimulation of the G-spot, which is located inside the vagina. It might be that the different way/angle in which it’s being stimulated facilitates greater pleasure, or perhaps some other factors come in. ↩
-
Except Icelanders, perhaps, whose insular population seems to have adapted to the gloominess that accompanies them much of the year.
The Prevalence of Seasonal Affective Disorder Is Low Among Descendants of Icelandic Emigrants in Canada, Andrés Magnússon and Jóhann Axelsson, 1993. ↩ -
Give Your Ideas Some Legs: The Positive Effect of Walking on Creative Thinking, Marily Oppezzo and Daniel L. Schwartz, 2014. ↩ ↩2
-
Executive Control Deficits as a Prodrome to Falls in Healthy Older Adults: A Prospective Study Linking Thinking, Walking, and Falling, Talia Herman et al., 2010. ↩
-
When does walking alter thinking? Age and task associated findings, Jennifer M. Srygley et al., 2008. ↩ -
-
One might wonder about the relationship between physical exercise and its health benefits ––– all sorts of physiological changes that occur in response to a single bout of exercise as well as to regular exercise. The benefits are, of course, not directly conferred by the exercise per se, but are a self-regulatory response of the body to exercise. Why would we evolve this way instead of, for example, having those processes occur independently of exercise?
One possibility, I think, is that it is not about whether exercise occurs or not, but about its timing. Our current human culture is at an extreme point of sedentariness. Probably even most people who nowadays exercise regularly (but whose job is not “active”) don’t move around as much as the average human did 30,000 years ago. That humans would stand, walk and run for great parts of the day was a given. Perhaps the exercise-induced physiological changes function most efficiently right after a person has exerted himself, owing to whatever physiological conditions then prevail (temperature, blood-sugar levels, hormone levels &c.); that is, had they occurred at any other time they would not have been as effective. Perhaps if humans had been sedentary, sitting almost motionless most of the day, day in day out, for thousands of years, these physiological activities would have evolved to occur throughout the day or at some random timing, simply because there was no better time for them to occur. But since humans did exert themselves, there was a sweet-spot for these activities, and so they evolved as a reaction –– to physical activity –– and therefore, in our lazy modern-day culture, people who are not active do not trigger them. To use a silly analogy, imagine a blacksmith at a time when fire has not been invented. She would try to work the hard iron from dawn to dusk, working very hard and not very productively. Now imagine someone invented a forge, but it’s some lala-forge which turns on and off depending on unknown factors such as the weather, barometric pressure and the mood of the gods. Nonetheless, almost every day the forge would ignite. The blacksmith, who is not foolish, would schedule her work around the operation of the forge. Whenever it ignited she would heat the iron bars and work them then ––– so much easier when they are hot and bendable.
When the forge turned off and the bars cooled down she would hang up her apron and go attend to other matters ––– the family garden, taking care of the children, mending furniture and so on ––– or simply relax. On days the forge didn’t ignite she would simply not work, knowing that it would in the coming days. She would have some mechanism of bells attached to a string which, when the string burned in the spontaneously igniting forge, would go off and call her to work. And this is the kind of blacksmith, I suspect, we internally evolved. Nowadays, for many people, the forge only rarely if ever ignites, but the blacksmith adheres to the age-long blacksmith tradition and will not work cold, hard iron.
However, there are also exercise-induced changes that I think operate on different “premises”. Apparently, based on research, there are beneficial health changes correlated with exercise whose effects vary with “exercise quantity”. That is, it is not merely that some minimal activity is necessary for the physiological processes behind them to start off; the benefit “continues accumulating” as one’s exercise is longer in time or higher in intensity. In these cases it is unlikely to be merely “timing” that is important, if at all. I think two things are behind these.
One, I think these changes are not “absolutely beneficial”, but are optimized to the environment and the requirements it makes of the individual. Such optimization could be behind the up- and down-regulation of skeletal muscle size in accordance with the demands put on the muscles. It would be beneficial to increase the size of a skeletal muscle if it’s exerted, but it would likewise be beneficial, from a resource-conservation point of view, to decrease its size if the extra mass is not needed.
Two, it is possible that some of the beneficial changes are either directly caused by the exertion, or depend on it via some other mechanism. While “the more (exercise) the better” might be true, insofar as this “better” means potentially increased life expectancy, it might be evolutionarily irrelevant if it doesn’t affect one’s or one’s offspring’s reproductive success ––– for example, if violence or contagious diseases tended to cap one’s lifespan at some lower bound anyway. ↩ -
Attracting Interest: Dynamic Displays of Proceptivity Increase the Attractiveness of Men and Women, Andrew P. Clark, 2008.
There are some issues with the design of the experiment, which Clark is aware of, mainly its size (number of actors), but overall the findings seem convincing on the main points. ↩ -
Women’s visual attention to variation in men’s dance quality, Bettina Weege, Benjamin P. Lange and Bernhard Fink, 2012.
Optimal asymmetry and other motion parameters that characterise high-quality female dance, Kristofor McCarty, Hannah Darwin, Piers L. Cornelissen, Tamsin K. Saxton, Martin J. Tovée, Nick Caplan and Nick Neave, 2017. ↩ -
I suggest it only apropos and rather tentatively. I have noticed that whenever I feed anything into a container ––– for example, use a spoon to get rice or granola or tea into a container with a narrow opening ––– my mouth instinctively opens. My suspicion has been that it’s perhaps a reflex that evolved to facilitate the feeding of toddlers. Since toddlers often imitate the faces they are looking at, opening one’s mouth is a good attempt at making the toddler open theirs, and so such a reflex could be adaptive.
You know how in caricatures of people trying to thread a needle they have their mouth closed over their tongue? I’ve thought that maybe it’s a conscious effort to suppress the reflex. ↩ -
A woman’s walk: Attractiveness in Motion, James F. Doyle, 2009.
I’m citing this paper mostly for its citations, its general discussion of the topic and the ideas behind the experiment, and less so for the “experiment itself”. Also, I’ll just add that it seems to me very odd, that is, wrong, to call something that is within normal human expression a “superstimulus”. ↩ -
I couldn’t actually get my hands on that paper, “Primal Screech”, Blake R. Margins, 1986, but it was referenced in Scraping sounds and disgusting noises, Trevor J. Cox, 2008. ↩
-
As in “gene vs. environment” studies. By the way, in these, of course, what is important is not the environment per se but its interaction with the individual. I’d say I was surprised that it took so long for “the academia” to realize that they were taking the word too literally, but I suspect that laziness and a lack of scientific integrity picked up where ignorance had left off, prolonging this for a good number of decades. ↩
-
Written in 177 B.C., the tablet was the work of a Babylonian scribe copying from an earlier document. As [Assyrian philologist Irving Finkel] translated the bewildering blend of Babylonian and Sumerian words, he began to realize it was a treatise on the Royal Game of Ur. The author speculated on the astronomical significance of the 12 squares at the center of the 20-square board and explained how certain squares portended good fortune: one square would bring “fine beer”; another would make a player “powerful like a lion.”
From Big Game Hunter, William Green, Time Magazine, 2008.
According to Finkel, delivered in speech at 2:04: “This is a clay tablet, written in cuneiform writing as you obviously know. This was written in the second century BC, so miles afterwards [after the discovered board was made, about the 26th century BC], by a Babylonian astronomer whose best friends were Greek astronomers. And he wrote this tablet, including giving the rules of how to play it, in his day. All that time afterwards.”
It is hard to assess from a single astrologer’s interpretation alone how widespread such notions were. Superstition spreads easily, but something like the attribution of effects to individual squares ––– which are not decorated accordingly, at least not on the board displayed by Finkel ––– is complex information that would not spread around so easily. ↩