Psychology and computer science relationship jokes


Categorization in Cognitive Computer Science

In the sentence, "Yojo batted an eraser across the desk," the words play and toy do not occur. With the background knowledge that Yojo is a cat, cats are playful creatures, and an eraser is a mouse-like object, one might interpret the action as playing and the eraser as a toy. This issue is not limited to natural language understanding, since interpreting a movie or a photograph would require the same kind of analysis. Such interpretations are necessary for reporting and classifying any observations.

Luria wrote a book about Shereshevskii, a man with a phenomenal memory for the exact words and images he observed. Because of his memory, Shereshevskii got a job as a newspaper reporter, but he was totally unsuited to the work.

His memory for detail was perfect, but he could not interpret the detail, determine its relevance, or produce a meaningful summary. In effect, Shereshevskii behaved like a feature-based perceiver attached to a vast database.

Every classifier, logical or fuzzy, feature-based or structural, depends on some model for relating instances to categories. Figure 1, for example, illustrates the underlying model for most neural nets, and Figure 2 illustrates the model for most logic-based systems.

Data models incorporate general assumptions about patterns typically found in well-behaved data; the popular bell-shaped curve, for example, is called "normal."

Breiman, a statistician who had designed and used a wide range of models, emphasized the need for models that accurately reflect the nature of the subject: when a model is fit to data to draw quantitative conclusions, the conclusions are about the model's mechanism, and not about nature's mechanism. It follows that if the model is a poor emulation of nature, the conclusions may be wrong.

Approaching problems by looking for a data model imposes an a priori straight jacket that restricts the ability of statisticians to deal with a wide range of statistical problems.

The best available solution to a data problem might be a data model; then again it might be an algorithmic model. The data and the problem guide the solution. To solve a wider range of data problems, a larger set of tools is needed. Every model is an approximation that extracts a simplified, computable mechanism from the overwhelming complexity of nature. As the statistician George Box observed, "All models are wrong, but some are useful."

In their original papers on chunks and frames, Newell, Simon, and Minsky tried to incorporate the full richness of the psychological models.

In later implementations, however, the word frame was applied to data structures that do little more than package a few pointers. Those packages are useful, but they don't model Selz's schematic anticipation, Bartlett's active organizations, or Wertheimer's structural laws. Minsky continued to argue for a more globally organized society of mind, and Newell proposed a unified theory of cognition called SOAR. The pioneers in AI realized from the beginning that human intelligence depends on global mechanisms, but the challenge of bridging the gap between local features and global structure has not been met.

Chess playing illustrates the importance of the Gestalt effects. In applying Selz's methods to chess, de Groot had chess players study positions and select a move while saying whatever came to mind: what moves or lines of play they were considering. He found no significant difference between masters and experts in the number of moves considered, depth of analysis, or time spent. The only significant difference was that the masters would usually focus on the best move at their first glance, while the nonmasters were distracted by moves that would dissipate their advantage.

Former world champion Emanuel Lasker said that chess is a highly stereotyped game. Instead of exhaustively analyzing all options, the master simply looks at the board and "sees" which moves are worth considering. After 40 years of chess programming, a computer was finally able to defeat the world champion, but only by a brute-force search.

To discover what the human could see at a glance, the computer had to analyze the details of billions of chess positions. The number of possible patterns in Go is many orders of magnitude greater than in chess, and a brute-force search cannot overcome the human advantage of "seeing" the patterns. As a result, no program today can play Go beyond a novice level. The schema or Gestalt theories appear in two forms that are sometimes used in combination: the image-like geometric patterns emphasized by the Gestalt psychologists, and the graph patterns of Selz, which have been easier to implement.

Categorization and Reasoning

Categorization and reasoning are interdependent cognitive processes. As an example, consider the rule of deduction called modus ponens: from an assertion P and a premise of the form "if P then Q," conclude Q. This rule depends on the most basic technique of categorization: recognizing that the P in the assertion belongs to the same category as the P in the premise. If the P in the assertion is not identical to the P in the premise, then a preliminary process of unification is required, which uses another technique of categorization: matching two expressions by finding a substitution that makes them identical. If the Ps are complex expressions, multiple categorization steps may be necessary. Deduction is one of Peirce's three methods of reasoning, each of which has a corresponding method of categorization.
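To make that dependence concrete, the sketch below (purely illustrative, not from the original text) applies modus ponens by first unifying the antecedent of a rule with an assertion; the predicate names and the "?x" variable convention are assumptions for the example.

```python
# Minimal sketch: modus ponens depends on matching (unifying) the P in the
# assertion with the P in the premise "if P then Q".

def unify(pattern, fact, bindings=None):
    """Unify a pattern such as ('human', '?x') with a ground fact.
    Variables are strings starting with '?'. Returns bindings or None."""
    if bindings is None:
        bindings = {}
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in bindings and bindings[pattern] != fact:
            return None
        new = dict(bindings)
        new[pattern] = fact
        return new
    if isinstance(pattern, tuple) and isinstance(fact, tuple) and len(pattern) == len(fact):
        for p, f in zip(pattern, fact):
            bindings = unify(p, f, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == fact else None

def modus_ponens(rule, fact):
    """rule = (antecedent_pattern, consequent_pattern); fact is ground."""
    antecedent, consequent = rule
    bindings = unify(antecedent, fact)
    if bindings is None:
        return None
    # Substitute the bindings into the consequent.
    return tuple(bindings.get(t, t) for t in consequent)

# "If x is human then x is mortal"; assertion: Socrates is human.
rule = (('human', '?x'), ('mortal', '?x'))
print(modus_ponens(rule, ('human', 'Socrates')))   # ('mortal', 'Socrates')
```

Here the categorization step is the call to unify: the conclusion can be drawn only after the assertion has been recognized as an instance of the rule's antecedent.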

The following examples show how each of the three methods for reasoning about propositions has a corresponding method for dealing with categories. For these examples, the words principle and fact are used as informal synonyms for proposition; a fact is considered more specialized than a principle, but either one could be arbitrarily complex.

Deduction: apply a general principle to infer some fact.
Induction: assume a general principle that explains many facts.
Abduction: guess a new fact that implies some given fact.
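Deduction was sketched above; the toy functions below (an invented illustration, with made-up predicates and data) suggest how the other two methods might look in code: induction generalizes a principle from a collection of facts, and abduction guesses a fact that, together with a known principle, would explain an observation.

```python
# Toy illustration of induction and abduction over simple facts.

facts = [('human', 'Socrates'), ('mortal', 'Socrates'),
         ('human', 'Plato'), ('mortal', 'Plato')]

def induce(facts):
    """If every individual with property A also has property B,
    propose the principle 'if A then B'."""
    humans = {x for p, x in facts if p == 'human'}
    mortals = {x for p, x in facts if p == 'mortal'}
    if humans and humans <= mortals:
        return ('if', 'human', 'then', 'mortal')

def abduce(principle, observation):
    """Given 'if A then B' and an observed B, guess A as an explanation."""
    _, a, _, b = principle
    if observation[0] == b:
        return (a, observation[1])

principle = induce(facts)                     # ('if', 'human', 'then', 'mortal')
print(abduce(principle, ('mortal', 'Zeno')))  # ('human', 'Zeno') -- a guess, not a proof
```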

In his pioneering work on symbolic logic, Boole used the same algebraic symbols for both propositions and categories. Since then, the notations have diverged, but the correspondences remain. The methods of categorization and reasoning used in AI and related branches of computer science can be grouped in three broad areas, all of which have been under development since John McCarthy coined the term artificial intelligence in the mid-1950s. Most new developments may be considered incremental improvements, even though many of the "increments" are sufficiently radical to be important breakthroughs.


Each of the three areas is characterized by its primary form of knowledge representation: features, structures, or rules. The oldest methods of both categorization and formal reasoning are based on collections of features, which may be processed by a variety of logical, statistical, and algorithmic techniques.

The features could be monadic predicates, as in Aristotle's differentiae, or they could be functions or dyadic predicates, in which the second argument or slot is a number or other value. A collection of features may be called a set, a list, a vector, a frame, or a logical conjunction. Among the feature-based methods are neural nets, decision trees, description logics, formal concept analysis, and a wide variety of vector-space methods, such as LSA.
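To make the feature-based idea concrete, here is a small sketch (with invented feature names and data) in which each instance is a vector of feature values and a category is represented by the centroid of its known instances; this is a simple stand-in for the vector-space methods mentioned above, not any particular system.

```python
# Feature-based categorization: assign an instance to the category whose
# centroid (average feature vector) is nearest.
import math

training = {
    'cat':  [[1.0, 0.9, 0.2], [0.9, 1.0, 0.1]],   # e.g. furry, playful, size
    'fish': [[0.0, 0.2, 0.1], [0.1, 0.3, 0.2]],
}

def centroid(vectors):
    return [sum(col) / len(col) for col in zip(*vectors)]

def classify(instance, categories):
    """Return the label of the category with the nearest centroid."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    centroids = {label: centroid(vs) for label, vs in categories.items()}
    return min(centroids, key=lambda label: dist(instance, centroids[label]))

print(classify([0.8, 0.95, 0.15], training))   # 'cat'
```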

A collection of features, by themselves, cannot distinguish "blind Venetians" from "Venetian blinds" or "Dog bites man" from "Man bites dog." To capture such distinctions, a second group of methods represents knowledge as graphs, whose nodes and links show how the parts are related to one another. Such graphs are commonly used to represent the syntax and semantics of languages, both natural and artificial. Some versions represent rigorous logic-based formalisms, and others use informal heuristics.

Among the structural methods are grammar-based parsers and pattern recognizers, constraint-satisfaction and path-finding systems, spreading-activation systems that pass messages through graphs, and graph-matching systems for finding patterns and patterns of patterns.
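As a tiny illustration of the structural approach (an invented sketch; the relation names are assumptions), representing each sentence as a set of labeled edges keeps "Dog bites man" and "Man bites dog" distinct, and matching a query graph amounts to checking that its edges occur in the data graph.

```python
# A graph here is a set of labeled directed edges (node, relation, node).
dog_bites_man = {('dog', 'agent-of', 'bite'), ('man', 'patient-of', 'bite')}
man_bites_dog = {('man', 'agent-of', 'bite'), ('dog', 'patient-of', 'bite')}

def matches(pattern, graph):
    """True if every edge of the pattern graph occurs in the data graph."""
    return pattern <= graph

query = {('dog', 'agent-of', 'bite')}
print(matches(query, dog_bites_man))   # True
print(matches(query, man_bites_dog))   # False
```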

Rules are often used to process features or structures, but in a rule-based system, the rules themselves are the knowledge representation. The rules may be formulas in logic, or they may be less formal heuristic rules in an expert system. The more formal rule processors are called theorem provers, and the more informal ones are called inference engines. In general, the rules of a rule-based system may be used for multistep categorization in diagnostics and pattern recognition or for multistep reasoning in planning and problem solving.
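A minimal sketch of such an inference engine, assuming invented diagnostic rules and facts, is a forward chainer that keeps applying any rule whose conditions are satisfied until no new conclusions appear; multistep categorization then falls out of repeated rule firings.

```python
# Rules are (conditions, conclusion) pairs; forward chaining applies them
# repeatedly until the set of facts stops growing.

rules = [
    ({'has_fever', 'has_cough'}, 'possible_flu'),
    ({'possible_flu', 'short_duration'}, 'likely_viral'),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({'has_fever', 'has_cough', 'short_duration'}, rules))
# includes 'possible_flu' and 'likely_viral'
```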

Large systems are often hybrids that combine more than one of these methods. A natural-language parser, for example, may use features for the syntactic and semantic aspects of words and build a parse tree to represent the grammatical structure of a sentence.

A reasoning system may combine a T-box for terminology defined by a description logic with an A-box for assertions and rules. The results of classification may be a static category or a dynamic control structure. Decision-tree systems, for example, are often used for robots because they learn quickly and generate a decision tree that can be compiled into an efficient control mechanism. To train a robot, a teacher can take it "by the hand" and guide it through a task. At each step, the system records a vector of features that represents the current state of the robot together with the response that would take it to the next step.

After that training pass, the system builds a decision tree that determines the response for any combination of features.
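The following sketch suggests how that training pass might look in code; the state features, the response labels, and the use of scikit-learn's DecisionTreeClassifier are assumptions for illustration, not details from the text.

```python
# Learning from demonstration: each recorded step pairs a feature vector for
# the robot's state with the teacher's response; a decision tree is induced
# to choose the response for new states.
from sklearn.tree import DecisionTreeClassifier

# state features: [distance_to_wall, holding_object, at_goal]
states = [[0.9, 0, 0],
          [0.4, 0, 0],
          [0.1, 0, 0],
          [0.1, 1, 0],
          [0.1, 1, 1]]
responses = ['forward', 'forward', 'grasp', 'turn', 'release']

tree = DecisionTreeClassifier().fit(states, responses)
print(tree.predict([[0.5, 0, 0]]))   # most likely ['forward']

# When the teacher corrects a mistake, the corrected step is added to the
# training data and the tree is rebuilt.
states.append([0.1, 0, 1]); responses.append('turn')
tree = DecisionTreeClassifier().fit(states, responses)
```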


If the robot makes a mistake on subsequent trials, the teacher can stop it, back it up to the point where the mistake occurred, and guide it through the next few steps. Then the system would modify the decision tree to prevent the robot from making the same mistake twice.

It might, however, make similar mistakes in slightly changed conditions. Since the early days of Hunt's work on concept learning, incremental improvements in the algorithms have enabled decision-tree systems to generalize more quickly and reduce the number of "similar mistakes". The boundaries between the three groups of AI systems are blurred because some features may be defined in terms of structures and some structures may be rule-like in their effect. As an example, a neural net for optical character recognition (OCR) might use a feature defined as "having an acute angle at the top."

As another example, analogical reasoning, which is based on structure mapping, can be used to derive the same kinds of conclusions as a rule-based system that uses induction to derive rules followed by deduction to apply the rules.

Comparison of logical and analogical reasoning

Ibn Taymiyya admitted that deduction in mathematics is certain.

But in any empirical subject, universal propositions can only be derived by induction, and induction must be guided by the same principles of evidence and relevance used in analogy. Figure 3 illustrates his argument: Deduction proceeds from a theory containing universal propositions.


But those propositions must have earlier been derived by induction with the same criteria used for analogy. The only difference is that induction produces a theory as an intermediate result, which is then used in a subsequent process of deduction. By using analogy directly, legal reasoning dispenses with the intermediate theory and goes straight from cases to conclusion.

If the theory and the analogy are based on the same evidence, they must lead to the same conclusions. The question in Figure 3 asks for information Q about some case P.

If the question Q has one or more unknowns, as in Selz's method of schematic anticipation, the unknowns trigger a search to find the missing information. That search may take many steps, which may apply different rules of the same theory or even rules from different theories. But before any theory can be applied, it must have been derived by induction. In analogical reasoning, the question Q leads to the same schematic anticipation, but instead of triggering the if-then rules of some theory, the unknown aspects of Q lead to the cases from which a theory could have been derived.

The case that gives the best match to the given case P may be assumed as the best source of evidence for estimating the unknown aspects of Q; the other cases show possible alternatives. The closer the agreement among the alternatives for Q, the stronger the evidence for the conclusion. In effect, the process of induction creates a one-size-fits-all theory, which can be used to solve many related problems by deduction. Case-based reasoning, however, is a method of bespoke tailoring for each problem, yet the operations of stitching propositions are the same for both.
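A minimal sketch of that case-based, best-match strategy (an invented example; the cases and feature names are assumptions) retrieves the stored case that agrees with the new case on the most known features and reuses its value for the unknown Q.

```python
# Case-based reasoning: answer a question about a new case by reusing the
# best-matching stored case, with no intermediate theory.

cases = [
    {'engine_starts': 0, 'lights_work': 0, 'diagnosis': 'dead battery'},
    {'engine_starts': 0, 'lights_work': 1, 'diagnosis': 'bad starter'},
    {'engine_starts': 1, 'lights_work': 1, 'diagnosis': 'no fault'},
]

def solve(new_case, cases, unknown):
    """Return the unknown's value from the best-matching stored case."""
    def similarity(case):
        shared = (set(case) & set(new_case)) - {unknown}
        return sum(case[f] == new_case[f] for f in shared)
    best = max(cases, key=similarity)
    return best[unknown]

print(solve({'engine_starts': 0, 'lights_work': 0}, cases, 'diagnosis'))
# 'dead battery'
```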

Creating a new theory that covers multiple cases typically requires new categories in the type hierarchy. To characterize the effects of analogies and metaphors, Way proposed dynamic type hierarchies, in which two or more analogous cases are generalized to a more abstract type T that subsumes all of them.

The new type T also subsumes other possibilities that may combine aspects of the original cases in novel arrangements. Sowa embedded the hierarchies in an infinite lattice of all possible theories. Some of the theories are highly specialized descriptions of just a single case, and others are very general. The most general theory at the top of the lattice contains only tautologies, which are true of everything. At the bottom is the contradictory or absurd theory, which is true of nothing.

The theories are related by four operators: contraction, expansion, revision, and analogy. These four operators define pathways through the lattice (Figure 4), which determine all possible ways of deriving new theories from old ones.

Figure 4: Pathways through the lattice of theories

To illustrate the moves through the lattice, suppose that A is Newton's theory of gravitation applied to the earth revolving around the sun and F is Niels Bohr's theory about an electron revolving around the nucleus of a hydrogen atom.

The path from A to F is a step-by-step transformation of the old theory to the new one. The contraction step from A to B deletes the axioms for gravitation, and the expansion step from B to C adds the axioms for the electrical force. The result of both moves is the equivalent of a revision step from A to C, which substitutes electrical axioms for gravitational axioms.

Unlike contraction and expansion, which move to nearby theories in the lattice, analogy jumps to a remote theory, such as C to E, by systematically renaming the types, relations, and individuals: for example, the sun is renamed to the nucleus and the earth to the electron. Finally, the revision step from E to F uses a contraction step to discard details about the earth and sun that have become irrelevant, followed by an expansion step to add new axioms for quantum effects.

One revision step can replace two steps of contraction and expansion, but one analogy step can transfer a complete theory from one domain to another just by relabeling the symbols. Different methods of walking through the lattice can produce the same results as induction, deduction, abduction, learning, case-based reasoning, or nonmonotonic reasoning.
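The sketch below (an invented shorthand; the axiom tuples are assumptions) treats a theory as a set of axioms and shows the four operators, with the Newton-to-Bohr walk from the text paraphrased as a revision followed by an analogy.

```python
# Theories as sets of axioms; each axiom is a tuple of symbols.

def contract(theory, axioms):          # move to a more general theory
    return theory - axioms

def expand(theory, axioms):            # move to a more specialized theory
    return theory | axioms

def revise(theory, remove, add):       # contraction followed by expansion
    return expand(contract(theory, remove), add)

def analogy(theory, renaming):         # jump to a remote theory by relabeling
    return {tuple(renaming.get(s, s) for s in axiom) for axiom in theory}

A = {('revolves-around', 'earth', 'sun'), ('attracted-by', 'gravitation')}
C = revise(A, {('attracted-by', 'gravitation')}, {('attracted-by', 'electric-force')})
E = analogy(C, {'earth': 'electron', 'sun': 'nucleus'})
print(E)   # the axioms now mention 'electron' and 'nucleus' instead of 'earth' and 'sun'
```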


As a method of creative discovery, Fauconnier and Turner defined conceptual blending, in which two conceptual structures are combined into a new one; that combination also corresponds to a walk through the lattice. Although all possible forms of reasoning, learning, and discovery can be reduced to walks in a lattice, the challenge is to find the correct path in an infinite lattice.

In AI, search methods can be guided by a heuristic function, which estimates the distance from any given point to any desired goal. The term optimality, which Fauconnier and Turner adopted, is used in linguistics and psychology for the constraints that characterize a desirable goal; any computable definition of such constraints could be programmed as a heuristic function.
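As a concrete, purely illustrative example of heuristic guidance, the A* sketch below expands whichever state has the lowest cost so far plus the heuristic's estimate of the remaining distance; the grid world and the Manhattan-distance heuristic are assumptions for the example.

```python
# Heuristic-guided search (A*): the heuristic estimates the remaining
# distance from a state to the goal.
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Return a shortest path from start to goal, or None."""
    frontier = [(heuristic(start, goal), 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt, step in neighbors(state):
            if nxt not in visited:
                est = cost + step + heuristic(nxt, goal)
                heapq.heappush(frontier, (est, cost + step, nxt, path + [nxt]))
    return None

def grid_neighbors(p):
    x, y = p
    return [((x + dx, y + dy), 1) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

manhattan = lambda p, g: abs(p[0] - g[0]) + abs(p[1] - g[1])
print(a_star((0, 0), (3, 2), grid_neighbors, manhattan))
```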

Levels of Cognition

The methods of studying and modeling cognition differ from one branch of cognitive science to another, but all of them are abstractions from nature. As Breiman cautioned, any conclusions derived from a model are "about the model's mechanism, and not about nature's mechanism." Figure 5 shows some models that have been postulated for animals at various levels of evolution.

Figure 5: Evolution of cognition

The cognitive systems of the animals at each level of Figure 5 build on and extend the capabilities of the earlier levels.

The worms at the top have rudimentary sensory and motor mechanisms connected by ganglia with a small number of neurons. A neural net that connects stimulus to response with just a few intermediate layers might be an adequate model. The fish brain is tiny compared to a mammalian brain, but it already has a complex structure that receives inputs from highly differentiated sensory mechanisms and sends outputs to equally differentiated muscular mechanisms, which support both delicate control and high-speed propulsion.

Exactly how those mechanisms work is not known, but the neural evidence suggests a division into perceptual mechanisms for interpreting inputs and motor mechanisms for controlling action. There must also be a significant amount of interaction between perceptual and motor mechanisms, and a simple neural net is probably an inadequate model. At the next level, mammals have a cerebral cortex with distinct projection areas for each of the sensory and motor systems. If the fish brain is already capable of sophisticated interpretation and control, the larger cortex must add something more.

Figure 5 labels it analogy and symbolizes it by a cat playing with a ball of yarn that serves as a mouse analog. Whether the structure-mapping mechanisms of computer analogies can explain the rich range of mammalian behavior is not known, but whatever is happening must be at least as complex and probably much more so.

The human level is illustrated by a typical human, Sherlock Holmes, who is famous for his skills at induction, abduction, and deduction. Those reasoning skills may be characterized as specialized ways of using analogies, but they work seamlessly with the more primitive abilities.

Computers that act like people, people that act like computers.

Psychology of Cyberspace - Humor

As our machines become more and more sophisticated - almost as sophisticated as their creators - we start to wonder whether there's much of a difference between the two. Does the human mind work like a computer?

Can computers become almost human? Interesting scientific and philosophical questions! These issues could lead to some rather maladaptive attitudes about human relationships that are parodied in jokes like this: Seeking technical support for Girlfriend: I'm currently running the latest version of Girlfriend 2.

psychology and computer science relationship jokes

I've been running the same version of DrinkingBuddies 1. I hear DrinkingBuddies won't crash if you run Girlfriend in background mode with the sound switched off. But I'm embarrassed to say that I can't find the button to turn it off. I just run them separately, and it works OK.

I probably should have stayed with Girlfriend 1. My friend also told me that Girlfriend 2. And after that, you have to upgrade to Wife 1.

On top of that, Wife 1. I told him to install Mistress 1. Anybody out there able to offer technical advice? Wanting to control women like they control their cars and computers. Wanting to understand women like they understand their cars and computers.

But failing on both scores. Not exactly an admirable portrayal of the male psyche! There is a strong tendency to perceive computers as if they are people, a phenomenon known as "transference." However, whether the computer acts more like a man or a woman is an issue open to debate.


In one joke about a "scientific poll" of attitudes concerning computers, the findings were divided. Women stated that computers should be referred to in the masculine gender because: 1. In order to get their attention, you have to turn them on; 2. They have a lot of data, but are still clueless; 3. They are supposed to help you solve problems, but half the time they are the problem; 4. As soon as you commit to one, you realize that, if you had waited a little longer, you could have had a better model.

Men conclude that computers should be referred to in the feminine gender because: 1. No one but the Creator understands their internal logic; 2. The native language they use to communicate with other computers is incomprehensible to everyone else; 3. Even your smallest mistakes are stored in long-term memory for later retrieval; 4. As soon as you make a commitment to one, you find yourself spending half your paycheck on accessories for it.

The ancient and never-ending battle of the sexes shines through once again! Cyberspace jokes - like any brand of humor - serve as a vehicle for expressing universal human issues. Other cyberspace bits, however, specialize in making fun of experiences that are unique to cyberspace - jokes that only experienced onliners will appreciate.

Anyone who has participated in an e-mail list discussion of some important change in the group will nod and chuckle when reading "How many mail list subscribers does it take to change a light bulb?" Or how about the freedom the internet offers in allowing everyone the opportunity to speak their mind? Is it too much freedom?

Perhaps we don't want every narcissistic, opinionated, loud-mouthed pundit and his brother bending our ears, as this bit of humor suggests:

I intersperse obscenity with tedious banality.
Addresses I have plenty of, both genuine and ghosted too,
On all the countless newsgroups that my drivel is cross-posted to.

Scientists tell us their favourite jokes: 'An electron and a positron walked into a bar…'

They would have found it earlier, but it was hiding behind two other genes. Mathematician Mandelbrot coined the word fractal, a form of geometric repetition.

To get to the other… eh? I've heard it before though. I guess its origins are lost in the mists of time. This is a joke I was told a long time ago, probably as a high school student in India, trying to come to terms with the baffling ways of statistics. What I like about it is how it alerts you to the limitations of reductionist thinking but also makes you aware that we are unlikely to fall into such traps, even if we are not experts in the field. I think this is just part of the cultural soup, so to speak.

I don't remember hearing it myself until the mids, when computers started getting in the way of everyone's lives! Then he heard something he didn't recognise… a loud, revving buzz coming from the woods. He went in to find out what strange animal's offspring was making this noise, and discovered a pair of snakes wielding a chainsaw.

She kept the other as a control. David Spiegelhalter, professor of statistics, University of Cambridge

Chemistry

Chemistry seems to have produced some laughs at Imperial College London.

He soon becomes familiar with the military habit of abbreviating everything. As his unit comes under sustained attack, he is asked to urgently inform his HQ. I think I heard this when I was a student in the early s.

This is my current favourite. It comes from my daughter, who is a year-old A-level science student. I can never remember that dang name. The cause of her sorrow Was para-dichloro- diphenyl-trichloroethane.

I first read this limerick in a science magazine when I was at school.