'Sentience' vs 'Sapience' in SF

Discussion in 'Science Fiction' started by Boreas, Oct 12, 2017.

  1. Boreas

    Boreas n log(log n) Staff Member

    I don't have many pet peeves with regard to SF literature, but this one really annoys me: when SF writers who should know better conflate 'sentience' with 'sapience'. Most do this. The only exception I've noticed is David Brin, and he even coined his own word in the Uplift books (I think it's his word?) to distinguish at least human-level self-awareness, cognitive abilities and judgement: sophont.

    I've asked this question elsewhere and I've been told by some that I'm being pedantic or a grammar Nazi. But I don't think either is true: 1) it's not a matter of grammar, and 2) I think the difference between these two words is quite important, and doubly so in SF, where the rise of self-awareness and high-level cognitive abilities is a staple of the genre.

    Most creatures with a nervous system are sentient. They are conscious and able to perceive the environment through their physical senses and react accordingly. They also have subjective experiences such as pain, pleasure and comfort. But only humans are sapient: not just self-aware, but imbued with the kind of cognitive ability to make conscious choices to reach some desired end state, whether for one's own good, for the good of a greater number or for a specific purpose, all of which might be called wisdom or sound judgement. Or I should qualify that and say fully sapient, because some other creatures, such as dolphins, chimpanzees and the other great apes, are known to exhibit varying degrees of self-awareness.

    What do you think? Is this really a pedantic point? Don't you find it strange that many SF authors confuse or misuse these terms?
     
  2. Diziet Sma

    Diziet Sma Administrator Staff Member

    Well, arguing that you're being picky, grammatically speaking, only shows how little grammatical understanding they have.

    I find the two concepts to be different.
    All sapient beings are sentient beings, but not all sentient beings (if any) are sapient.
    You can teach a monkey a few tricks by repetition. However, this does not mean the monkey can extrapolate from that learning experience and apply the learned behaviour progressively. Sentient creatures lack free will, as they are moved by pre-established behavioural patterns (with little room for change). Sapient beings, on the other hand, enjoy free will. Well, as long as you don't follow Spinoza's definition...

    Note: can one use the word 'sapience' as a noun, or is it an adjective?
     
  3. Tiran

    Tiran Well-Known Member

    I understand where the line is, but there isn't a single computer system that is "sentient" yet, let alone "sapient". Sentience would be an enormous step, and probably the more momentous one. So maybe that's where that focus came from.
     
  4. Safari Bob

    Safari Bob Well-Known Member

    Hmm... how would you categorize old, married men?
    or
    :D
     
    Diziet Sma likes this.
  5. kenubrion

    kenubrion Well-Known Member

    I was thinking about this as I mowed my lawn yesterday, for some reason. Per Boreas' definitions, my puppy over there, Sofie, knows what the mowing is, since she's been watching it for nine years and rolling in the fresh-cut grass behind me. That's sentience. But she doesn't like it, since it takes me away from throwing balls to her. That's sapience. How did I do?
     
    Diziet Sma likes this.
  6. Boreas

    Boreas n log(log n) Staff Member

    The adjective forms would be 'sentient' and 'sapient'.
    Well, crows can problem-solve on their own, without being taught or shown, and remember the lesson. They showcase a high degree of intelligence, but they aren't self-aware or sapient.
    Don't you think that some computers or programmes are already sentient to a limited degree? Would a self-driving automobile be considered sentient? It can perceive the environment and change its behaviour accordingly. What about sophisticated air conditioners that have sensors to detect changes in temperature and modulate their own functions accordingly? Granted, they won't undergo any kind of subjective experience like a biological organism does, but it still feels like sentience, albeit of a very limited kind. But in SF, I think many of the instances where 'sentience' has been used were really meant to denote awareness capable of reflection and cognition, not simply the feeling of sensations.
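    To make that 'very limited kind' concrete, here's a toy sketch of the sort of sense-and-react loop I have in mind (entirely my own illustration, not how any real air conditioner or car is actually programmed):

        # Toy reactive 'air conditioner': it senses and reacts, nothing more.
        # All names and thresholds here are made up for illustration.

        TARGET_TEMP_C = 22.0

        def read_temperature_sensor() -> float:
            """Stand-in for a real sensor; returns a fixed reading here."""
            return 26.5

        def choose_action(current_temp: float) -> str:
            """Fixed stimulus-to-response rules -- no reflection, no sense of self."""
            if current_temp > TARGET_TEMP_C + 1.0:
                return "cool"
            if current_temp < TARGET_TEMP_C - 1.0:
                return "heat"
            return "idle"

        temp = read_temperature_sensor()
        print(f"Sensed {temp} C -> action: {choose_action(temp)}")

    Everything it 'perceives' is mapped straight to a response; there's no inner model of itself anywhere in the loop, which is why I'd only call it sentience in the loosest sense.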
    Heh. I guess it would depend on the wife which category the old married man might fall under.
    kenubrion, you really are the puppy yourself, aren't you? This is turning out to be like Simak's City.
     
  7. Tiran

    Tiran Well-Known Member

    Well, no. You're using a very liberal idea of "feeling sensation". If I poke a rock with a stick, the rock receives vibration, the rock leans away from the stick and then "reacts" by leaning forward again when the stick is pulled away. Is that sentience?

    The key to the definition isn't the sensation part but the subjective part, which is "feeling". Feeling isn't the existence of nerve impulses but the interpretation of those impulses into an experience - a running interpretative simulation of the world outside the thinker, held separate from the thinker itself. Complex interactions with the outside world are bundled into self-referencing packets like emotions - things that the simplest animals or logic machines neither have nor need in order to make reactive decisions to environmental stimuli.

    It is easy to write off "feeling" as just a complex decision-making circuit, and maybe it is. But sentience really is a decision-making logic system so complex that it understands its own being is itself a factor in its environment and must be part of almost every decision. Nothing we've constructed has that kind of self-reference. And I think the only way you can really get there is if cognitive systems need to make decisions where the simulated mind state of other players is a factor. Sentience comes from some degree of sociability - whether that is just mating or swarm behavior vs. more complex cooperation, I couldn't say. I doubt a solitary mind could evolve to sentience, though it may display some of the 'wisdom' of sapience through complexity and experience.

    I would argue sapience is when the decision-making steps beyond the complex wants of an emotional decision-maker to one that makes choices that are in conflict with emotion. That's the "wisdom" in sapience.
     
  8. Diziet Sma

    Diziet Sma Administrator Staff Member

    Well, I don't know the first thing about crows, but I doubt I'd find an evolutionary pattern worth mentioning about these birds. That is, even though crows might be cleverer than many other creatures, they haven't been able to remove themselves from the animal category by using their intelligence. They don't seem to be able to extrapolate from their learning experiences with that superior intelligence in a way that breaks from expected crow behaviour. Show a crow something shiny and I think we can all agree what its response will be.

    I like the point Tiran made:
    To me, the main difference comes down to free will, to acting away from purely instinctive behaviour.


    In Applied Behaviour Analysis, humans tend to follow the ABC model: Antecedent, Behaviour and Consequence. A rather simplistic example would be:
    I want a cookie (A), I throw a tantrum (B), mammy gives me the cookie (C).
    This behaviour can be altered, though: I want a cookie (A), I throw a tantrum (B), I get ignored and no cookie (C). Therefore, next time I will ask nicely (B) and get my cookie (C).
    This is what I would personally think of when drawing the line between sentience and sapience.
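    Something like this rough sketch, purely my own toy illustration of that ABC loop (the numbers and names are invented, and it isn't how ABA is actually modelled):

        # Toy ABC (Antecedent -> Behaviour -> Consequence) loop.
        # Behaviours that earn the cookie are strengthened; ignored ones fade.
        import random

        preferences = {"tantrum": 1.0, "ask nicely": 1.0}  # relative tendencies

        def consequence(behaviour: str) -> bool:
            """Mammy's rule in this toy world: only asking nicely gets the cookie."""
            return behaviour == "ask nicely"

        def update(behaviour: str, got_cookie: bool) -> None:
            preferences[behaviour] *= 1.5 if got_cookie else 0.5

        for trial in range(10):                       # antecedent: wanting a cookie
            b = random.choices(list(preferences),
                               weights=list(preferences.values()))[0]  # behaviour
            c = consequence(b)                        # consequence
            update(b, c)

        print(preferences)  # 'ask nicely' should dominate by the end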

    Now, regarding, for example, the Minds in Banks' Culture books, I think they can easily be thought of as sapient. However, other AIs in SF - would they be sapient, or just a highly evolved sentience? I'm thinking of the artificial humans in Do Androids Dream of Electric Sheep?
     
  9. Tiran

    Tiran Well-Known Member

    The problem with this is that dogs also follow this pattern. Dogs will pretend guilt to show contrition, for instance, and hide bones to put off immediate satisfaction for future reward. I would argue it isn't a decision-making paradigm that rules sapience, but the level of complexity and abstraction the thoughts take.

    Sapience likely occurs when an organism has the mental capacity to pursue a line of reasoning that has no apparent future reward, but is pursued for its own satisfaction. We developed this capacity because it randomly produces incredible technological leaps - which is pretty much what human evolution has consisted of since before the Stone Age.

    It can also be argued that many people are not "sapient", and other people are not "conscious". Human beings can get along fine without developing abstract reasoning processes or self-awareness. That's the miracle of socialized living - everyone can come along on the ride.
     
  10. Boreas

    Boreas n log(log n) Staff Member

    No, your example seems to be just a demonstration of Newtonian mechanics at play. But I think I was being liberal in my interpretation of 'sentience' with the self-driving automobile example. Perhaps the software that dictates how a car should respond to obstacles, or to any situation that meets a pre-programmed set of parameters, is more akin to biological, evolutionary instinct (but without the evolutionary aspect) rather than a very limited form of sentience. I guess my using that example came from my assumption that evolutionary instincts (which are inherited biological programmes) automatically equate to sentience.

    With regard to subjectively 'feeling', and in the specific case of silicon-based artificial intelligences, is there even a way to recognise this? How could an AI actually feel sensations that could translate to pain or even a sense of well-being? Could a sophisticated programme that takes measures to protect itself from threats, or replicates itself in order to ensure continuity, be an indication of subjective feeling? I think, at least, that such programmes would definitely be responding to stimuli if they were doing so. But could the fact that some software on the cusp of possibly being AI takes such a course be an indication that it feels some measure of sensation, such as anxiety or tension over its own survival?

    I don't know the first thing about the field of artificial intelligence, but my layman's guess is that it would be impossible to tell. In 2001: A Space Odyssey, I always took HAL to be an extremely sophisticated AI that was in some regards sentient but not truly sapient. It was only in Clarke's follow-up that you get a glimmer that perhaps HAL might have been on the very edge of true, full-fledged, human-level self-awareness when it asks Dr. Chandra, "Will I dream?"

    But is it really an understanding, or merely instinct or traits inherited through the evolutionary process? Or perhaps both these qualities are the same for all intents and purposes - the fact that the entity is conscious is an automatic indication of its being sentient. And I don't see why the "simulated mind state of other players" must be the primary factor. Sentient beings respond perfectly well to purely environmental stimuli such as weather patterns, and that's likely the way sentience first arose.

    Yes, I definitely agree. A large part of that is being able to take the long view. To put off immediate gratification for long-term gains, and especially for an abstract purpose that doesn't necessarily involve the needs of the individual making such a decision.

    I think the point is not the fact that many individuals of the species do not live up to their full potential of sapience (I'd say the vast majority of people live their lives without much foresight or sound judgement or even self-awareness with respect to their behavioural patterns), but the fact that we all as a species possess the capacity for it.
     
  11. Boreas

    Boreas n log(log n) Staff Member

    Maybe the reason many SF authors use 'sentience' when they really mean 'sapience' is to avoid such subtleties as are being pointed out here? That seems lazy, especially in a genre touted as philosophical fiction with such strong epistemological roots. So, are the writers wilfully taking the easy way out, or are they themselves confused by the distinction? Then why does David Brin never conflate the terms? I think there must be other such conscientious authors, but I can't recall any specific names.

    And it's not just the misuse for AI, which I can partially understand because, as Tiran says, attaining sentience is as important a step as attaining sapience, although there is still a difference. But authors also use it to mean self-aware biological beings capable of abstract thought, which irks me even more.
     
  12. Tiran

    Tiran Well-Known Member

    Someone could certainly make a counterargument, but "awareness" of self separate from other entities is required to have "feelings". A creature that doesn't have to make allowances for the cognition of its fellows is highly unlikely to develop its own sense of self. "Self-awareness" - the ability to recognize feelings as being distinct from sensory input - is something that arises from a mind capable of modeling behavior. Our consciousness is essentially a real-time model of our own cognition - which is the reason our self-aware minds are largely so unaware of most of what our brains are up to: the model is just an approximation running inside the machine that created it.

    I don't think pain or reproduction are feelings any more than seeing orange or having to pee are. Those are just sensory inputs, not feelings. Sadness, interest, boredom and excitement are feelings - complex sets of non-quantitative thought that aren't simple reactions to sensory information but are internal "behavior" unto themselves. Emotions could be said to be pre-loaded programs that light up large areas of the brain based on little to no external input. Excitement, for instance, is a feeling that causes us to increase our focus and engage in more planning than non-exciting cognitive events do. Measuring excitement in a machine might be relatively simple if we observe a great deal of computation happening after relatively little input. "Emergent behavior".
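    As a toy illustration of what I mean by that measure (the metric, names and thresholds are all invented here - not anything anyone actually uses):

        # Toy 'excitement' gauge: lots of internal computation per external event.
        def excitement_ratio(input_events: int, internal_steps: int) -> float:
            """Internal work done per external event; higher means more self-generated activity."""
            return internal_steps / max(input_events, 1)

        def looks_excited(input_events: int, internal_steps: int,
                          threshold: float = 100.0) -> bool:
            return excitement_ratio(input_events, internal_steps) > threshold

        # One small prompt triggering a flood of planning:
        print(looks_excited(input_events=1, internal_steps=5000))   # True
        # Routine sensor polling - lots of input, little follow-on computation:
        print(looks_excited(input_events=500, internal_steps=800))  # False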

    Another measure would be cognitive processes that give rise to complex external events or constructs that aren't "organic" to the entity. Reproduction, for instance, is a completely opaque, non-conscious process: not only do we not need higher brain functions for it, we aren't even aware of any part of the process. But agriculture isn't organic to ourselves - it is something we have to abstract in order to make happen in the world. An AI might also engage in efforts that aren't natural extensions of its electronic being. That would be another clue that the AI is sentient.

    This supposes that a tree, losing its leaves in fall, is sentient. We can call the tree's reaction to changing light and temperature "feelings", but then any reaction to any event is a feeling under that paradigm. An analogue guided torpedo also has no feelings, even though it "observes" the target and adjusts its direction like a paparazzo chasing Beyoncé.

    Brin writes books about monkeys and dolphins acting like Homo sapiens, so it shouldn't be surprising that he's got sapience on the brain. But I really think sentience is a high enough bar without requiring emergent fictional intelligences to be imbued with the wisdom of sapience as well.
     
    Last edited: Oct 17, 2017 at 6:49 PM
