Is it possible to calculate the probability of a child having an IQ in a certain range based on their parents' IQ?

For instance, if two parents have IQs of 160, can one calculate the probability that their child has an IQ over 160?

This is precisely the question Galton had in mind when he invented regression, and he tested it with heights. Given the parents' heights, a child's height is actually more likely to be closer to the population mean: tall parents tend to have shorter children, and short parents taller children (hence the term "regression"). To answer your question, a child of parents with very high IQs (160 is very high) is likely to have a higher IQ than the population average, but a lower one than the parents. The effect is even stronger than for height, as IQ is a lot less heritable. The precise probability can be calculated given some assumptions.
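Under textbook assumptions (IQ normally distributed with mean 100 and SD 15, plus an assumed midparent-child IQ correlation, set here to an illustrative 0.4), the calculation is a short exercise in regression to the mean. A minimal Python sketch:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Cumulative probability of a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))

MEAN, SD = 100.0, 15.0   # conventional IQ scaling
r = 0.4                  # assumed midparent-child IQ correlation (illustrative only)

midparent_iq = 160.0
# Regression to the mean: the expected child IQ shrinks toward the population mean.
expected_child = MEAN + r * (midparent_iq - MEAN)     # 124.0 under these assumptions
# Spread of child IQs around that prediction (standard error of estimate).
residual_sd = SD * math.sqrt(1 - r ** 2)

# Probability that the child scores above 160, given these assumptions.
p_over_160 = 1 - normal_cdf(160.0, expected_child, residual_sd)
print(f"Expected child IQ: {expected_child:.1f}")
print(f"P(child IQ > 160): {p_over_160:.4f}")
```

With these (debatable) inputs the child's expected IQ is 124, well above average but well below the parents, and the chance of exceeding 160 is under half a percent. A different assumed correlation changes the numbers, not the qualitative conclusion.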

Galton, F. (1886). Regression towards mediocrity in hereditary stature. The Journal of the Anthropological Institute of Great Britain and Ireland, 15, 246-263.

To find out more about the influence of genetics on intelligence:

This news release from the journal Nature explains why it is so difficult to identify genes associated with IQ: “'Smart genes' prove elusive” (September 8, 2014)

The Tech Museum of Innovation at Stanford University provides a Q&A about the influence of genes and environment on IQ.

The Cold Spring Harbor Laboratory offers an interactive tool called Genes to Cognition that provides information about many aspects of the genetics of neuroscience.

Introduction to the Normal Distribution (Bell Curve)

The normal distribution is a continuous probability distribution that is symmetrical on both sides of the mean, so the right side of the center is a mirror image of the left side.

The area under the normal distribution curve represents probability and the total area under the curve sums to one.

Most of the continuous data values in a normal distribution cluster around the mean, and the further a value is from the mean, the less likely it is to occur. The tails are asymptotic, which means that they approach but never quite touch the horizontal (x-)axis.

For a perfectly normal distribution the mean, median and mode will be the same value, visually represented by the peak of the curve.

The normal distribution is often called the bell curve because the graph of its probability density looks like a bell. It is also known as the Gaussian distribution, after the German mathematician Carl Friedrich Gauss, who first described it.

What is the difference between a normal distribution and a standard normal distribution?

A normal distribution is determined by two parameters: the mean and the variance. A normal distribution with a mean of 0 and a standard deviation of 1 is called a standard normal distribution.

Figure 1. A standard normal distribution (SND).

This is the distribution that is used to construct tables of the normal distribution.

Why is the normal distribution important?

The bell-shaped curve is a common feature of nature and psychology

The normal distribution is the most important probability distribution in statistics because many continuous variables in nature and psychology display this bell-shaped curve when their values are compiled and graphed.

For example, if we randomly sampled 100 individuals we would expect to see a normal distribution frequency curve for many continuous variables, such as IQ, height, weight and blood pressure.
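As a sketch of this idea, such a sample can be simulated with Python's standard library (the mean of 100 and SD of 15 follow IQ conventions; the 15-point bin width is arbitrary):

```python
import random
from statistics import mean, stdev

random.seed(42)  # fixed seed so the sketch is reproducible
# Draw 100 simulated IQ scores from a normal population (mean 100, SD 15).
sample = [random.gauss(100, 15) for _ in range(100)]

print(f"sample mean = {mean(sample):.1f}")
print(f"sample SD   = {stdev(sample):.1f}")

# Tally scores into 15-point bins: the counts already trace a rough bell shape.
for lo in range(55, 145, 15):
    count = sum(lo <= x < lo + 15 for x in sample)
    print(f"{lo:3d}-{lo + 14:3d}: {'#' * count}")
```

Even with only 100 draws, the middle bins dominate and the counts thin out toward the tails; a larger sample traces the bell more smoothly.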

Parametric significance tests require a normal distribution of the samples' data points

The most powerful (parametric) statistical tests used by psychologists require data to be normally distributed. If the data do not resemble a bell curve, researchers may have to use a less powerful class of tests: non-parametric statistics.

Converting the raw scores of a normal distribution to z-scores

We can standardize the values (raw scores) of a normal distribution by converting them into z-scores.

This procedure allows researchers to determine the proportion of values that fall within a specified number of standard deviations of the mean (i.e. to apply the empirical rule).
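The conversion itself is a one-line formula, z = (x − μ) / σ; for example, in Python:

```python
def z_score(raw, mu, sigma):
    """Standardize a raw score: its distance from the mean in SD units."""
    return (raw - mu) / sigma

# IQ tests are conventionally scaled to a mean of 100 and an SD of 15.
print(z_score(130, 100, 15))  # 2.0  (an IQ of 130 is 2 SDs above the mean)
print(z_score(85, 100, 15))   # -1.0 (an IQ of 85 is 1 SD below the mean)
```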

Probability and the normal curve: What is the empirical rule formula?

The empirical rule in statistics allows researchers to determine the proportion of values that fall within certain distances from the mean. The empirical rule is often referred to as the three-sigma rule or the 68-95-99.7 rule.

If the data values in a normal distribution are converted to standard scores (z-scores), the empirical rule describes the percentage of the data that falls within specific numbers of standard deviations (σ) of the mean (μ) for bell-shaped curves.

The empirical rule allows researchers to calculate the probability of randomly obtaining a score from a normal distribution.

68% of data falls within the first standard deviation from the mean. This means there is a 68% probability of randomly selecting a score between -1 and +1 standard deviations from the mean.

95% of the values fall within two standard deviations from the mean. This means there is a 95% probability of randomly selecting a score between -2 and +2 standard deviations from the mean.

99.7% of data will fall within three standard deviations from the mean. This means there is a 99.7% probability of randomly selecting a score between -3 and +3 standard deviations from the mean.
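These three percentages can be recovered from the normal curve itself; a quick check using only Python's standard library:

```python
import math

def proportion_within(k):
    """Proportion of a normal distribution lying within ±k standard deviations."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"within ±{k} SD: {proportion_within(k):.1%}")
# within ±1 SD: 68.3%, within ±2 SD: 95.4%, within ±3 SD: 99.7%
```

The exact values (68.27%, 95.45%, 99.73%) are usually rounded to 68-95-99.7 in textbooks.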

How can I check if my data follows a normal distribution?

Statistical software (such as SPSS) can be used to check whether your dataset is normally distributed by calculating the three measures of central tendency. If the mean, median and mode are very similar values, there is a good chance that the data follows a bell-shaped distribution.

It is also advisable to plot a frequency graph, so you can check the visual shape of your data. (If your chart is a histogram, you can add a distribution curve in SPSS: from the menus, choose Elements > Show Distribution Curve.)

Normal distributions become more apparent (i.e. perfect) the finer the level of measurement and the larger the sample from a population.

You can also calculate coefficients, such as skewness and kurtosis, that describe the size of the distribution's tails relative to the central bump of the bell curve. Formal normality tests, such as the Kolmogorov-Smirnov and Shapiro-Wilk tests, can also be run in SPSS.

These tests compare your data to a normal distribution and provide a p-value. If the result is significant (p < .05), your data differ significantly from a normal distribution (so, on this occasion, we do not want a significant result: we need a p-value higher than .05).
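Outside SPSS, the informal checks above (mean ≈ median, symmetric tails) can be sketched with Python's standard library. The skewness function below is the standard adjusted sample skewness, and the simulated data are purely illustrative:

```python
import random
from statistics import mean, median, stdev

def skewness(data):
    """Adjusted sample skewness: roughly 0 for a symmetric distribution."""
    m, s, n = mean(data), stdev(data), len(data)
    return n / ((n - 1) * (n - 2)) * sum(((x - m) / s) ** 3 for x in data)

random.seed(1)  # illustrative data: 500 draws from a normal population
data = [random.gauss(100, 15) for _ in range(500)]

# For roughly normal data, mean and median nearly coincide and skewness is near 0.
print(f"mean     = {mean(data):.1f}")
print(f"median   = {median(data):.1f}")
print(f"skewness = {skewness(data):.2f}")
```

A clearly non-zero skewness, or a large gap between mean and median, is a warning sign that a formal normality test is worth running.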

How to reference this article:

McLeod, S. A. (2019, May 28). Introduction to the normal distribution (bell curve). Simply Psychology.

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 Unported License.


IQ tests hurt kids, schools -- and don't measure intelligence

By Scott Barry Kaufman
Published July 7, 2013 3:30PM (EDT)


1991: As I settle into my seat in the back of the classroom, I can’t take my eyes off the perfect girl. She is the lead in every play, the soloist in every choir performance, and the winner of every writing award. Quite simply, she is the pride and joy of every teacher at the school. She also happens to be beautiful, and I am infatuated. I decide I’m going to talk to her after class. It’s sixth grade and I’m back in the public school system. A fresh start. A new, improved— and I hope, suaver—me.

“Is Scott Kaufman here?” the teacher asks. My trance is interrupted. Without hesitation I raise my hand. “Can you come sit up front please?” she requests. Confused, I pick up my backpack and move down, inching closer and closer to the perfect girl, who is sitting in the front row. As I get closer, my heart starts beating faster. Why am I being asked to move to the front? What if I have to sit next to her? What would I say? Walk smooth, Scott. Smooth. I start to slow down. I put on a big, confident smile. Finally I reach my destination. The desk right next to hers.

She is writing in her notebook. Probably composing the next great sonata. I try to look cool. I nod my head a lot. I think that’s a cool thing to do. The teacher seems impressed with my coolness, as she is smiling. She kneels down beside me and within earshot of the perfect girl, whispers, “Scott, your Mom requested that you sit at the front of the classroom since you have a serious learning disability. Thanks for changing seats.”

The room starts to spin. Did the perfect girl hear? She must have heard. Humiliated, I sink down in my chair. I no longer feel cool. I feel trapped. It seems that no matter what I want to achieve, I am imprisoned by my label.

As early as the nineteenth century in Europe, case reports of children with learning disabilities in reading, writing, and arithmetic cropped up. Here’s a description in 1896 from the physician W. Pringle Morgan of a 14-year-old named Percy F.: “I might add that the boy is bright and of average intelligence in conversation. . . . The schoolmaster who has taught him for some years says that he would be the smartest lad in school if the instruction were entirely oral.”

The history of learning disabilities is a tale of multiple conceptualizations, spanning several continents. In the United States, physician Samuel Orton studied children with reading disabilities who had at least average IQ scores. Orton conceptualized language and motor disabilities as brain dysfunction in spite of normal or even above average intelligence. He believed that to adequately diagnose learning disabilities, it was important to combine a variety of sources of information, including IQ test scores, achievement test scores, family histories, and school histories. For those who then warranted the learning disability diagnosis, Orton believed the proper intervention consisted of directly targeting the specific area of weakness and using the child’s “spared” abilities to help remediate the disability.

In Germany, the neurologist Kurt Goldstein studied the deficits of soldiers who had sustained head injuries, focusing on their problems with visual perception and attention. Goldstein’s student Alfred Strauss took this approach and studied adolescents with learning difficulties. Along with educator Laura Lehtinen, Strauss developed remediation techniques that involved providing students with a distraction-free environment and training their perceptual deficits. They merely inferred brain damage, though; they didn’t actually peer inside the head.

The Goldstein-Strauss approach was widespread in the 1950s and 1960s. Thousands of children were identified as having “minimal brain dysfunction” by the use of a checklist, which included things such as academic difficulty, aggression, and “acting-out.” If a student exhibited 9 out of 37 possible symptoms, they received treatment, which typically meant they spent hours a day doing perceptual tasks such as connecting dots and learning how to distinguish between a foreground and background. Although a systematic review of 81 studies concluded that these techniques were useless, many public schools in the United States continued to rely on perceptual training to remediate learning difficulties.

In the 1950s and 1960s, a number of psychologists and speech and language specialists, including William Cruickshank, Helmer Myklebust, and Doris Johnson, began focusing more on the specific cognitive processes relating to academic difficulties. Their focus was much more targeted on specific areas of academic weakness. But this hodgepodge of different approaches created much confusion in the schools, because children with distinctly different areas of academic weakness were lumped together, and no one knew what to call them. Children who were having difficulties learning in school were given a number of different labels, including “dyslexia,” “learning disorder,” “perceptual disorder,” and “minimal brain dysfunction.”

On Saturday, April 6, 1963, parents and professionals met in Chicago to explore the “problems” of the perceptually handicapped child. All were struggling to integrate these various approaches. At this historic conference Samuel Kirk, professor of special education at the University of Illinois, coined the term “learning disabilities,” noting, “I have used the term ‘learning disabilities’ to describe a group of children who have disorders in the development of language, speech, reading, and associated communication skills needed for social interaction. In this group, I do not include children who have sensory handicaps, such as blindness, because we have methods of managing and training the deaf and blind. I also excluded from this group children who have generalized mental retardation.”

Professionals, educators, and parents rejoiced. Finally they had a single, unified label.

Kirk’s speech was highly influential on the first federal definition of learning disabilities, in the 1969 “Children with Specific Learning Disabilities Act.” The act’s definition was essentially Kirk’s:

The term “specific learning disability” means a disorder in one or more of the basic psychological processes involved in understanding or in using language, spoken or written, which may manifest itself in imperfect ability to listen, think, speak, read, write, spell, or do mathematical calculations. The term includes such conditions as perceptual handicaps, brain injury, minimal brain dysfunction, dyslexia, and developmental aphasia. The term does not include children who have learning problems which are primarily the result of visual, hearing, or motor handicaps, or mental retardation, or emotional disturbance, or of environmental, cultural, or economic disadvantage.

Notice there’s no actual mention of “intelligence” in this definition. There’s the fuzzy term “basic psychological processes.” The core of the definition is that those with a specific learning disability (SLD) show “unexpected” low achievement in a specific academic area that cannot be explained by other factors. This definition of specific learning disability remains in place today, virtually unchanged from its 1969 formulation, so it’s important to understand its origins: It was literally a definition created by a committee.

But defining the term was only the first step. Educators needed to know how they should identify children with a specific learning disability. Beginning with the “Right to Education for All Handicapped Children Act” of 1975, the following guidelines were included for identification:

The child does not achieve commensurate with age and ability when provided with appropriate educational experiences.

The child has a severe discrepancy between levels of ability and achievement in one or more of seven areas that are specifically listed (basic reading skills, reading comprehension, mathematics calculation, mathematics reasoning, oral expression, listening comprehension, and written expression).

The first guideline was intended to make sure that low educational achievement was due to an intrinsic characteristic of the student, and not just a reflection of bad teaching. The second guideline was their attempt to measure “unexpected” low achievement. But they had a problem. There was no good way for educators to measure the “basic cognitive processes” mentioned in their definition. What were these mysterious processes? Theory-based IQ tests, grounded in neuropsychological processes, hadn’t yet arrived on the scene.

Their solution: use a “severe discrepancy” between IQ and achievement. This decision was largely based on the Isle of Wight studies conducted in the early 1970s. Michael Rutter and William Yule found tentative evidence that there are meaningful differences between two different groups of poor readers—those whose low reading was unexpected based on their IQ (“specific reading retardation”) and those whose low reading was “expected” based on their low IQ score (“general reading backwardness”). Rutter and Yule concluded their study with the following: “The next question clearly is: ‘do the two groups need different types of remedial help with their reading?’ No data are available on this point but the other findings suggest that the matter warrants investigation.”

But the U.S. government needed guidelines and couldn’t wait for more research. So they left their guidelines open-ended, leaving it up to each state to decide what constituted a “severe discrepancy” between IQ and achievement. Of course, states differed quite a bit, creating a situation in which parents who wanted to gain a specific learning disability diagnosis for their child could pack up and move to a state whose guidelines required a smaller discrepancy! States also disagreed on which IQ test should be used and whether a global IQ score or subscale should be used. As we’ll see, these aren’t trivial differences.

Thus was born one of the most unintelligent methods of identifying learning disabilities ever invented.

Despite the high reliability of IQ test scores across most of the lifespan, IQ testing is not an exact science. One of Binet’s key insights is that you can’t measure someone’s IQ—or any psychological trait, for that matter—to the same level of precision as you can measure a person’s height or weight. There are many reasons why a person’s test score can change from one testing session to the next. One major source of IQ fluctuation is measurement error. Sometimes a score can be seriously underestimated because the test taker zoned out or temporarily became distracted. For instance, perhaps just before one IQ testing session, the test taker had a traumatic breakup that affects his or her concentration. It’s also possible for a person’s IQ score to be artificially inflated, which can happen with lucky guessing or cheating. There are some cases on record of parents feeding their children the answers ahead of time.

But the source of measurement error isn’t always the test taker. There’s plenty of room for administration errors, such as two different test examiners scoring answers differently, or one examiner making a clerical mistake and accidentally omitting the third digit in a child’s IQ score. Just how prevalent are these errors? One study found that about 90 percent of examiners made at least one error, and two-thirds of the errors resulted in a different IQ score. Also, despite IQ test administrators reporting confidence in their scoring accuracy, the average level of agreement was only 42.1 percent. As Kevin McGrew notes, “This level of examiner error is alarming, particularly in the context of important decision-making.”

To account for measurement error, most modern IQ tests provide an examiner with a confidence interval—the range of IQ scores that are likely to contain a person’s “true” IQ. Of course, there is no such thing as a true IQ score. The only way we’d actually be able to find that out would be to give a person the same IQ test an infinite number of times. But it’s clearly not feasible to give the same person the same test even a handful of times, so most IQ test manuals provide a range of IQ scores, leaving it up to the examiner to choose his or her confidence levels.

A commonly chosen confidence interval is 68 percent. Suppose you are trying to predict what an 11-year old child’s IQ score will be at the age of 21 and you know that there’s a .70 correlation in the general population between IQ measured at age 11 and IQ measured at age 21 (this correlation is at the upper end of what is typically found). Based solely on that information, what range of IQ scores can you expect he will obtain on his twenty-first birthday?

It depends how confident you want to be. If you are only 68 percent confident, you can expect that the child’s true score is somewhere within 10 points of his 11-year-old score (in both directions—10 points higher or 10 points lower than his original IQ score). But that’s with only 68 percent confidence. As Alan Kaufman notes, “I wouldn’t cross a busy intersection if I had only a 65% to 70% probability of making it to the other side.”

For high-stakes decisions, test administrators have the option of increasing their confidence interval to 90 percent or even 95 percent. Of course, higher confidence comes at a cost: it widens the range of possible IQ scores. In the example of this 11-year-old boy, if you want to be 95 percent confident of what this child’s IQ score will be at age 21, you’d have to expect a range of 20 points in either direction.
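The two intervals quoted above can be reproduced from the standard error of estimate, SD × √(1 − r²), assuming the conventional IQ standard deviation of 15; a short sketch:

```python
import math

SD = 15.0  # conventional IQ standard deviation
r = 0.70   # age-11 to age-21 IQ correlation, from the example above

# Standard error of estimate: spread of later scores around the prediction.
se_est = SD * math.sqrt(1 - r ** 2)

for z, level in ((1.0, "68%"), (1.96, "95%")):
    print(f"{level} interval: about ±{z * se_est:.1f} points")
```

This gives roughly ±10.7 points at 68% confidence and ±21 points at 95%, matching the rounded ±10 and ±20 figures in the text.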

Most contemporary IQ tests are a bit more reliable, but no test exists that is perfectly reliable. Even using the most reliable IQ tests available today, the expected spread is significant. Kevin McGrew reviewed IQ fluctuations among today’s most frequently administered IQ tests and estimated that the full range of expected IQ differences for most of the general population is 16 to 26 points.

The problem gets even worse when you realize that those who are most impacted by high-stakes decisions—those at the extreme low and high ends of the bell curve—are also the ones who are most likely to show the largest test score fluctuations. The technical term for this phenomenon is “regression to the mean.” Let’s say you learn a new game, such as Scrabble, and the first time you play you do really great (beginner’s luck). All else being equal, the next time you play you’ll probably perform closer to average. Same thing if you performed really poorly the first time. Chances are, you’ll perform better next time. This applies to any form of measurement. Sports rookies who have an amazing first year are rarely as hot the second year. Even the “linsanity” of Jeremy Lin cooled down. It’s a statistical fact that initial expectations, based on a single number, can’t be trusted.

But measurement error isn’t the only culprit in IQ test score fluctuations. There are lots of other reasons why a person’s IQ score might differ from one testing session to the next. One important (but often overlooked) cause is the format of the test. Different IQ tests measure a different mixture of cognitive abilities, and school psychologists often find different IQ scores if they administer more than one IQ test to the same person (even if the test manual says they are measuring the same skills).

Just how much can scores fluctuate from one IQ test battery to the next? During 2002–2003, as part of validation for their new IQ test, the KABC-II, Alan and Nadeen Kaufman looked at IQ test scores from a dozen children who were tested on three different contemporary IQ tests. Consider the IQ profiles of a representative sample of those children, aged 12–13. The first thing to note is that those exposed to greater opportunities for learning (higher SES, based on parents’ education) tended to score higher on IQ tests than those from lower-SES backgrounds. But even collapsing across SES, every single preadolescent had a different IQ score based on which test they took. The differences for the dozen children ranged from 1 to 22 points, with an average difference of 12 points. Leo earned IQs that ranged from 102 to 124. Brianna ranged from 105 to 125. In some districts, Brianna would have qualified as “gifted” based solely on her KABC-II score. But if the district looked at her WJ III score, she would be considered as having only average intelligence. Therefore, an ideal IQ score would be one that not only averages across multiple testing sessions but also uses multiple test batteries to get at what you are trying to measure and averages those scores as well.

But not all test score fluctuations are the result of measurement error or changes in test battery. Various school factors significantly influence IQ scores, such as quality of instruction, enriching classes and afterschool activities, entering school late, intermittent attendance, length of schooling, and summer vacations. All of these factors influence genuine brain maturation. An inconvenient truth for educators who employ rigid IQ cutoff scores when making important decisions is that the environment matters, and people really do grow at different rates.

There are also important personal factors that can affect IQ scores, such as changes in test anxiety and test motivation. In a recent analysis of multiple studies based on 2,008 participants, Angela Duckworth and her colleagues found that offering material incentives increased IQ scores substantially (the effects ranged from medium to large). The effect of incentives depended on the person’s IQ score, however. Offering external rewards boosted IQ scores much more among those with below-average IQs than among those with above-average IQ scores. These results don’t mean IQ scores are meaningless as indicators of cognitive ability. It’s pretty difficult to obtain a high IQ score based solely on passion. Indeed, IQ remained a significant predictor of life outcomes, even taking motivation into account, and IQ scores were a better predictor of academic achievement than test motivation.

Nevertheless, the study highlights the fact that motivation is an important contributor to IQ test performance. No test administrator knows all of the causes of a testee’s low IQ score. Test examiners are trained to look out for signs of low motivation and high test anxiety, but they don’t always take their qualitative observations into account when interpreting a child’s score.

If IQ is such a fallible measuring stick, can we really predict an individual’s future level of academic achievement from their current IQ score? Averaging over many students, we can. The most reliable IQ tests typically show correlations with academic achievement ranging from the mid-.60s to the mid-.70s. These correlations offer some of the most reliable predictions in all of psychology.

But even with correlations this high, about 40 to 60 percent of differences in academic outcomes are not related to IQ scores. There are numerous factors that contribute to academic achievement (many of which we will review throughout this book). These include specific cognitive abilities, other student characteristics such as motivation, persistence, self-control, mindset, self-regulation strategies, classroom practices, design and delivery of curriculum and instruction, school demographics, climate, politics and practices, home and community environments, and, indirectly, state and school district organization and governance.

In fact, the strength of the relationship between IQ and academic achievement depends heavily on just how you define “academic achievement.” Consider two recent studies conducted by Angela L. Duckworth, Patrick D. Quinn, and Eli Tsukayama on middle school students. They found that self-control predicted changes in report card grades better than IQ, whereas IQ predicted changes in standardized achievement test scores better than self-control. The teachers indicated that they factored in completion of homework assignments, class participation, effort, and attendance when determining report card grades. These results suggest that GPA highlights a broader range of key life skills than standardized test performance.

Even the very highest correlations between IQ and academic achievement leave plenty of room for error. Kevin McGrew found a correlation of .75 between IQ and standardized academic achievement test performance on a representative sample of the most recent edition of the Woodcock-Johnson tests of cognitive abilities and achievement. Based on this correlation— which is on the very high end of what is typically found—just how well could he predict the academic achievement of those with IQs within the 70–80 range (those often labeled “slow learners”)?

Consider a scatter plot of the relationship between IQ and academic achievement (averaged across tests of reading, math, and written language). Each little circle represents a real, live, breathing person. Even for individuals within that small range of IQ scores, expected achievement scores ranged quite a bit, from about 40 to 110. Half of the individuals within the 70–80 IQ range achieved at or below their expected achievement, but importantly, the other half scored at or above their predicted achievement. This finding has some pretty striking implications! Back in 1937, Cyril Burt made his famous pint of milk analogy: “Capacity must obviously limit content. It is impossible for a pint jug to hold more than a pint of milk, and it is equally impossible for a child’s educational attainment to rise higher than his educable capacity.”

McGrew concludes that the correct metaphor for the association between IQ and academic achievement is not that the jug can’t hold more milk, but that the “cup can flow over.” According to McGrew, “the carte blanche assumption that all students with disabilities should have an alternative set of educational standards and an assessment system is inconsistent with empirical data. . . . The current reality is that despite being one of the flagship developments in all of psychology, intelligence tests are fallible predictors of academic achievement.”

McGrew’s findings don’t apply only to those with IQs in the 70–80 range. No matter what IQ band you pull out from McGrew’s analysis, you’ll find the same thing. In fact, a law emerges. Using the most reliable IQ tests available today, McGrew notes that “for any given IQ test score, half of the students will obtain achievement scores at or below their IQ score. Conversely, and frequently not recognized, is that for any given IQ test score, half of the students will obtain achievement scores at or above their IQ score.” Clearly a child’s current discrepancy between IQ and achievement doesn’t necessarily indicate a learning disability.

But perhaps the biggest flaw in the severe discrepancy method is that it’s a fundamentally unintelligent method. It treats single IQ scores as the arbiter of truth, without looking at the person’s history and understanding the numbers in context. Responsible and intelligent use of IQ tests require us to consider the student’s overall pattern of strengths and weaknesses (not just on the IQ test but even more generally in terms of talents, and social and emotional functioning), life aspirations, developmental history, environmental circumstances, and opportunities to learn.

Robert Sternberg and Elena Grigorenko sum the situation up nicely: “The use of difference scores in diagnosing reading disabilities is analogous to the building of a house of cards. Millions of high-stake decisions are being made on the basis of a procedure that is flawed and greatly in need of modification.” You would think that every single school in the United States would have firmly placed the severe discrepancy method in the dustbin by now. Alas, this isn’t the case. A recent survey by Perry Zirkel and Lisa Thomas found that the severe discrepancy approach remains a viable approach in the vast majority of states in the United States, with the decision to use the method left up to the local districts. If you are one of those districts still relying on the severe discrepancy approach, you may want to seriously rethink your procedures for identifying learning disabilities.

Excerpted with permission from "Ungifted: Intelligence Redefined," by Scott Barry Kaufman. Available from Basic Books, a member of The Perseus Books Group. Copyright © 2013.

Scott Barry Kaufman

Scott Barry Kaufman is Scientific Director of The Imagination Institute in the Positive Psychology Center at the University of Pennsylvania.

Probability is the measure of the likelihood of an event occurring. It is quantified as a number between 0 and 1, with 1 signifying certainty, and 0 signifying that the event cannot occur. It follows that the higher the probability of an event, the more certain it is that the event will occur. In its most general case, probability can be defined numerically as the number of desired outcomes divided by the total number of outcomes. This is further affected by whether the events being studied are independent, mutually exclusive, or conditional, among other things. The calculator provided computes the probability that an event A or B does not occur, the probability A and/or B occur when they are not mutually exclusive, the probability that both event A and B occur, and the probability that either event A or event B occurs, but not both.

Complement of A and B

Given a probability A, denoted by P(A), it is simple to calculate the complement, or the probability that the event described by P(A) does not occur, P(A'). If, for example, P(A) = 0.65 represents the probability that Bob does not do his homework, his teacher Sally can predict the probability that Bob does his homework as follows:

P(A') = 1 - P(A) = 1 - 0.65 = 0.35

Given this scenario, there is therefore a 35% chance that Bob does his homework. Any P(B') would be calculated in the same manner. It is worth noting that in the calculator above, the two events can be independent, i.e., if P(A) = 0.65, P(B) does not necessarily have to equal 0.35; it could equal 0.30 or some other number.
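The complement rule can be checked directly in code; this is a minimal sketch (the helper name `complement` is ours, not part of the calculator):

```python
# Complement rule: P(A') = 1 - P(A)
def complement(p_a: float) -> float:
    """Probability that event A does not occur."""
    return 1.0 - p_a

# Bob skips his homework with probability 0.65,
# so the probability he does it is the complement.
p_skip = 0.65
p_done = complement(p_skip)
print(round(p_done, 2))  # 0.35
```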

Intersection of A and B

The intersection of events A and B, written as P(A ∩ B) or P(A AND B), is the joint probability of at least two events, shown below in a Venn diagram. In the case where A and B are mutually exclusive events, P(A ∩ B) = 0. Consider the probability of rolling a 4 and a 6 on a single roll of a die; since this is not possible, these events are mutually exclusive. Computing P(A ∩ B) is simple if the events are independent: the probabilities of events A and B are simply multiplied. To find the probability that two separate rolls of a die result in a 6 each time:

P(A ∩ B) = P(A) × P(B) = 1/6 × 1/6 = 1/36

The calculator provided considers the case where the probabilities are independent. Calculating the probability is slightly more involved when the events are dependent, and involves an understanding of conditional probability, or the probability of event A given that event B has occurred, P(A|B). Take the example of a bag of 10 marbles, 7 of which are black, and 3 of which are blue. Calculate the probability of drawing a black marble if a blue marble has been withdrawn without replacement (the blue marble is removed from the bag, reducing the total number of marbles in the bag):

Probability of drawing a blue marble: P(A) = 3/10

Probability of drawing a black marble: P(B) = 7/10

Probability of drawing a black marble given that a blue marble was drawn: P(B|A) = 7/9

As can be seen, the probability that a black marble is drawn is affected by any previous event where a black or blue marble was drawn without replacement. Thus, if a person wanted to determine the probability of withdrawing a blue and then a black marble from the bag:

Probability of drawing a blue and then a black marble using the probabilities calculated above: P(A ∩ B) = P(A) × P(B|A) = 3/10 × 7/9 = 7/30
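The marble example can be verified with exact fractions using Python's standard library; a small sketch (the variable names are our own):

```python
from fractions import Fraction

# Bag of 10 marbles: 7 black, 3 blue.  Drawing without replacement
# means the second draw depends on the first.
p_blue = Fraction(3, 10)             # P(A): first draw is blue
p_black_given_blue = Fraction(7, 9)  # P(B|A): black, after one blue removed

# Chain rule for dependent events: P(A and B) = P(A) * P(B|A)
p_blue_then_black = p_blue * p_black_given_blue
print(p_blue_then_black)  # 7/30
```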

Union of A and B

In probability, the union of events, P(A U B), essentially involves the condition where any or all of the events being considered occur, shown in the Venn diagram below. Note that P(A U B) can also be written as P(A OR B). In this case, the "inclusive OR" is being used: at least one of the events within the union must occur, but all of them may occur simultaneously. There are two cases for the union of events: either the events are mutually exclusive, or they are not. In the case where the events are mutually exclusive, the calculation of the probability is simpler:

P(A U B) = P(A) + P(B)

A basic example of mutually exclusive events would be the rolling of a die where event A is the probability that an even number is rolled and event B is the probability that an odd number is rolled. It is clear in this case that the events are mutually exclusive, since a number cannot be both even and odd, so P(A U B) = 3/6 + 3/6 = 1, since a standard die has only odd and even numbers.

The calculator above computes the other case, where the events A and B are not mutually exclusive. In this case:

P(A U B) = P(A) + P(B) - P(A ∩ B)

Using the example of rolling a die again, find the probability that an even number or a number that is a multiple of 3 is rolled. Here the sample space is represented by the 6 faces of the die, written as:

S = {1, 2, 3, 4, 5, 6}
Probability of an even number: P(A) = {2, 4, 6} = 3/6
Probability of a multiple of 3: P(B) = {3, 6} = 2/6
Intersection of A and B: P(A ∩ B) = {6} = 1/6
P(A U B) = 3/6 + 2/6 - 1/6 = 2/3
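The die example can be reproduced with sets and inclusion-exclusion; a brief sketch (the helper name `p` is our own):

```python
from fractions import Fraction

# One roll of a fair die, S = {1, 2, 3, 4, 5, 6}
even = {2, 4, 6}      # event A
mult_of_3 = {3, 6}    # event B

def p(event: set, space_size: int = 6) -> Fraction:
    """Probability of an event on a fair die."""
    return Fraction(len(event), space_size)

# Inclusion-exclusion: P(A U B) = P(A) + P(B) - P(A ∩ B)
p_union = p(even) + p(mult_of_3) - p(even & mult_of_3)
print(p_union)  # 2/3
```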

Exclusive OR of A and B

Another possible scenario that the calculator above computes is P(A XOR B), shown in the Venn diagram below. The "Exclusive OR" operation is defined as the event that A or B occurs, but not both simultaneously. The equation is as follows:

P(A XOR B) = P(A) + P(B) - 2P(A ∩ B)

As an example, imagine it is Halloween, and two buckets of candy are set outside the house, one containing Snickers, and the other containing Reese's. Multiple flashing neon signs are placed around the buckets of candy insisting that each trick-or-treater only takes one Snickers OR Reese's but not both! It is unlikely, however, that every child adheres to the flashing neon signs. Given a probability of Reese's being chosen as P(A) = 0.65, of Snickers being chosen as P(B) = 0.349, and a P(unlikely) = 0.001 that a child exercises restraint while considering the detriments of a potential future cavity, calculate the probability that Snickers or Reese's is chosen, but not both:

P(A XOR B) = 0.65 + 0.349 - 2 × 0.65 × 0.349 = 0.999 - 0.4537 = 0.5453

Therefore, there is a 54.53% chance that Snickers or Reese's is chosen, but not both.
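Assuming, as the calculator does, that the two choices are independent, the XOR probability can be verified numerically (the function name is ours):

```python
# Exclusive OR for independent events:
# P(A XOR B) = P(A) + P(B) - 2 * P(A) * P(B)
def p_xor_independent(p_a: float, p_b: float) -> float:
    return p_a + p_b - 2 * p_a * p_b

p_reeses = 0.65     # P(A): Reese's chosen
p_snickers = 0.349  # P(B): Snickers chosen
print(round(p_xor_independent(p_reeses, p_snickers), 4))  # 0.5453
```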

Down Syndrome

Down syndrome (DS), also called Trisomy 21, is a condition in which a person is born with an extra chromosome. Chromosomes contain hundreds, or even thousands, of genes. Genes carry the information that determines your traits (features or characteristics passed on to you from your parents). With Down syndrome, the extra chromosome causes delays in the way a child develops, mentally and physically.

The physical features and medical problems associated with Down syndrome can vary widely from child to child. While some kids with DS need a lot of medical attention, others lead healthy lives.

Though Down syndrome can't be prevented, it can be detected before a child is born. The health problems that may go along with DS can be treated, and many resources are available to help kids and their families who are living with the condition.

What Causes It?

Normally, at the time of conception a baby inherits genetic information from its parents in the form of 46 chromosomes: 23 from the mother and 23 from the father. In most cases of Down syndrome, a child gets an extra chromosome 21, for a total of 47 chromosomes instead of 46. It's this extra genetic material that causes the physical features and developmental delays associated with DS.

Although no one knows for sure why DS happens and there's no way to prevent the chromosomal error that causes it, scientists do know that women age 35 and older have a significantly higher risk of having a child with the condition. At age 30, for example, a woman has about a 1 in 1,000 chance of conceiving a child with DS. Those odds increase to about 1 in 400 by age 35. By 40 the risk rises to about 1 in 100.

How Down Syndrome Affects Kids

Kids with Down syndrome tend to share certain physical features such as a flat facial profile, an upward slant to the eyes, small ears, and a protruding tongue.

Low muscle tone (called hypotonia) is also characteristic of children with DS, and babies in particular may seem especially "floppy." Though this can and often does improve over time, most children with DS typically reach developmental milestones, like sitting up, crawling, and walking, later than other kids.

At birth, kids with DS are usually of average size, but they tend to grow at a slower rate and remain smaller than their peers. For infants, low muscle tone may contribute to sucking and feeding problems, as well as constipation and other digestive issues. Toddlers and older kids may have delays in speech and self-care skills like feeding, dressing, and toilet teaching.

Down syndrome affects kids' ability to learn in different ways, but most have mild to moderate intellectual impairment. Kids with DS can and do learn, and are capable of developing skills throughout their lives. They simply reach goals at a different pace, which is why it's important not to compare a child with DS against typically developing siblings or even other children with the condition.

Kids with DS have a wide range of abilities, and there's no way to tell at birth what they will be capable of as they grow up.

Medical Problems Associated With DS

While some kids with DS have no significant health problems, others may experience a host of medical issues that require extra care. For example, almost half of all children born with DS will have a congenital heart defect.

Kids with Down syndrome are also at an increased risk of developing pulmonary hypertension, a serious condition that can lead to irreversible damage to the lungs. All infants with Down syndrome should be evaluated by a pediatric cardiologist.

Approximately half of all kids with DS also have problems with hearing and vision. Hearing loss can be related to fluid buildup in the inner ear or to structural problems of the ear itself. Vision problems commonly include strabismus (cross-eyed), near- or farsightedness, and an increased risk of cataracts.

Regular evaluations by an otolaryngologist (ear, nose, and throat doctor), audiologist, and an ophthalmologist are necessary to detect and correct any problems before they affect language and learning skills.

Other medical conditions that may happen more frequently in kids with DS include thyroid problems, stomach and intestinal problems, seizure disorders, breathing problems, including sleep apnea and asthma, obesity, an increased chance of infections, and a higher risk of childhood leukemia. People with Down syndrome sometimes have an unstable upper spine and should be evaluated by a doctor before participating in physical activities. Fortunately, many of these conditions are treatable.

Prenatal Screening and Diagnosis

Two types of prenatal tests are used to detect Down syndrome in a fetus: screening tests and diagnostic tests. Screening tests estimate the risk that a fetus has DS; diagnostic tests can tell whether the fetus actually has the condition.

Screening tests are cost-effective and easy to perform. But because they can't give a definitive answer as to whether a baby has DS, these tests are used to help parents decide whether to have more diagnostic tests.

Diagnostic tests are about 99% accurate in detecting Down syndrome and other chromosomal abnormalities. However, because they're performed inside the uterus, they are associated with a risk of miscarriage and other complications.

For this reason, invasive diagnostic testing previously was generally recommended only for women age 35 or older, those with a family history of genetic defects, or those who've had an abnormal result on a screening test.

However, the American College of Obstetrics and Gynecology (ACOG) now recommends that all pregnant women be offered screening with the option for invasive diagnostic testing for Down syndrome, regardless of age.

If you're unsure about which test, if any, is right for you, your doctor or a genetic counselor can help you sort through the pros and cons of each.

  • Nuchal translucency testing. This test, performed between 11 and 14 weeks of pregnancy, uses ultrasound to measure the clear space in the folds of tissue behind a developing baby's neck. (Babies with DS and other chromosomal abnormalities tend to accumulate fluid there, making the space appear larger.) This measurement, taken together with the mother's age and the baby's gestational age, can be used to calculate the odds that the baby has DS. Nuchal translucency testing is usually performed along with a maternal blood test.
  • The triple screen or quadruple screen (also called the multiple marker test). These tests measure the quantities of normal substances in the mother's blood. As the names imply, the triple screen tests for three markers; the quadruple screen includes one additional marker and is more accurate. These tests are typically offered between 15 and 18 weeks of pregnancy.
  • Integrated screen. This uses results from first-trimester screening tests (with or without nuchal translucency) and blood tests with a second trimester quadruple screen to come up with the most accurate screening results.
  • A genetic ultrasound. A detailed ultrasound is often performed at 18 to 20 weeks in conjunction with the blood tests, and it checks the fetus for some of the physical traits and abnormalities associated with Down syndrome.
  • Cell free DNA. This test analyzes fetal DNA found in the mother's blood. It can be done in the first trimester and is more accurate at detecting Trisomy 21 than standard blood tests. Currently, cell free DNA testing is only offered to women at high risk of having a baby with Down syndrome.
  • Chorionic villus sampling (CVS). CVS involves taking a tiny sample of the placenta, either through the cervix or through a needle inserted in the abdomen. The advantage of this test is that it can be performed during the first trimester, typically between 10 and 12 weeks. The disadvantage is that it carries a slightly greater risk of miscarriage and other complications than amniocentesis.
  • Amniocentesis. This test, performed between 15 and 20 weeks of pregnancy, involves the removal of a small amount of amniotic fluid through a needle inserted in the abdomen. The cells can then be analyzed for the presence of chromosomal abnormalities. Amniocentesis carries a small risk of complications, such as preterm labor and miscarriage.
  • Percutaneous umbilical blood sampling (PUBS) or cordocentesis. Usually performed after 18 weeks, this test uses a needle to retrieve a small sample of blood from the umbilical cord. It carries risks similar to those associated with amniocentesis.

After a baby is born, if the doctor suspects DS based on the infant's physical characteristics, a karyotype (a blood or tissue sample stained to show chromosomes grouped by size, number, and shape) can be done to verify the diagnosis.

Resources That Can Help

If you're the parent of a child diagnosed with Down syndrome, you may at first feel overwhelmed by feelings of loss, guilt, and fear. Talking with other parents of kids with DS may help you deal with the initial shock and grief and find ways to look toward the future. Many parents find that learning as much as they can about DS helps ease some of their fears.

Experts recommend enrolling kids with Down syndrome in early-intervention services as soon as possible. Physical, occupational, and speech therapists and early-childhood educators can work with your child to encourage and accelerate development.

Many states provide free early-intervention services to kids with disabilities from birth to age 3, so check with your doctor or a social worker to learn what resources are available in your area.

Once your child is 3 years old, he or she is guaranteed educational services under the Individuals with Disabilities Education Act (IDEA). Under IDEA, local school districts must provide "a free appropriate education in the least restrictive environment" and an individualized education program (IEP) for each child.

Where to send your child to school can be a difficult decision. Some kids with Down syndrome have needs that are best met in a specialized program, while many others do well attending neighborhood schools alongside peers who don't have DS. Studies have shown that this type of situation, known as inclusion, is beneficial both for the child with DS and for the other kids.

Your school district's child study team can work with you to determine what's best for your child, but remember, any decisions can and should involve your input, as you are your child's best advocate.

Today, many kids with Down syndrome go to school and enjoy many of the same activities as other kids their age. A few go on to college. Many transition to semi-independent living. Still others continue to live at home but are able to hold jobs, thus finding their own success in the community.

How to Calculate Your Mental Age

Your chronological age, that is, the number of years since the day of your birth, does not always align with what psychologists term your "mental age." Determining the difference between the two is useful in designing lesson plans for especially intelligent or mentally disabled students, though the methods by which psychologists calculate mental age are limited. Standardized tests like the Stanford-Binet have proven reliable, while doctor assessments in cases of severe disability are little more than educated guesses. Still, doctor assessments demonstrated reliability equal to that of testing in a 1986 study published in the "Journal of Pediatric Psychology."

Take an IQ test administered by a qualified, professional proctor. Online tests can be fun, but they lack expert analysis and come with warnings that they are for entertainment purposes only.

Write down your score in the following formula: IQ=MA/CA * 100, where “MA” is your mental age and “CA” is your chronological age. For example, if your chronological age is 10 and your IQ score is 120, the formula would read, 120=x/10 * 100, where “x” is your mental age.

Divide both sides of the equation by 100. This leaves you with 1.2=x/10 because dividing the right side of the equation cancels out the multiplier of 100, and dividing the left side of the equation, or 120/100, equals 1.2.

Multiply both sides of the equation by 10 to solve for "x," or your mental age. Since "x" is currently the numerator in the fraction x/10, multiplying by 10, or 10/1 to be specific, yields 10x/10, or simply, "x." Multiplying the left side of the equation equals 12, or 1.2 * 10. Therefore, x=12. So in this example, your mental age is 12.
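The algebra above can be folded into a single rearranged formula, MA = IQ × CA / 100; a minimal sketch (the function name is ours):

```python
# Rearranging IQ = MA / CA * 100 gives MA = IQ * CA / 100.
def mental_age(iq: float, chronological_age: float) -> float:
    """Mental age implied by an IQ score and a chronological age."""
    return iq * chronological_age / 100

# The worked example: IQ 120 at chronological age 10.
print(mental_age(120, 10))  # 12.0
```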


Cognitive development slows at about age 16 and stops at about 18, making the calculation of mental age irrelevant for adults. IQ tests for adults instead use a "standard deviation" formula measured against a mean score of 100; no mental age calculation is used in the IQ scores of adults.

How to Test IQ?

So, what is an IQ test? An IQ test, commonly also called an intelligence test, aims to express an individual's intellectual potential numerically.

The unknown value in the formula IQ = (mental age / chronological age) × 100 is the mental age. The intelligence test determines this unknown value to complete the formula and produces a general IQ score regardless of age.

The purpose of these tests is to guide people in shaping their education according to their IQ, with the aim of raising their scores.


Juvenile Crime, Juvenile Justice (2001)

Research over the past few decades on normal child development and on development of delinquent behavior has shown that individual, social, and community conditions as well as their interactions influence behavior. There is general agreement that behavior, including antisocial and delinquent behavior, is the result of a complex interplay of individual biological and genetic factors and environmental factors, starting during fetal development and continuing throughout life (Bock and Goode, 1996). Clearly, genes affect biological development, but there is no biological development without environmental input. Thus, both biology and environment influence behavior.

Many children reach adulthood without involvement in serious delinquent behavior, even in the face of multiple risks. Although risk factors may help identify which children are most in need of preventive interventions, they cannot identify which particular children will become serious or chronic offenders. It has long been known that most adult criminals were involved in delinquent behavior as children and adolescents; most delinquent children and adolescents, however, do not grow up to be adult criminals (Robins, 1978). Similarly, most serious, chronically delinquent children and adolescents experience a number of risk factors at various levels, but most children and adolescents with risk factors do not become serious, chronic delinquents. Furthermore, any individual factor contributes only a small part to the increase in risk. It is, however, widely recognized that the more risk factors a child or adolescent experiences, the higher their risk for delinquent behavior.

A difficulty with the literature on risk factors is the diversity of the outcome behaviors studied. Some studies focus on behavior that meets diagnostic criteria for conduct disorder or other antisocial behavior disorders; others look at aggressive behavior, lying, or shoplifting; still others rely on juvenile court referral or arrest as the outcome of interest. Furthermore, different risk factors and different outcomes may be more salient at some stages of child and adolescent development than at others.

Much of the literature that has examined risk factors for delinquency is based on longitudinal studies, primarily of white males. Some of the samples were specifically chosen from high-risk environments. Care must be taken in generalizing this literature to girls and minorities and to general populations. Nevertheless, over the past 20 years, much has been learned about risks for antisocial and delinquent behavior.

This chapter is not meant to be a comprehensive overview of all the literature on risk factors. Rather, it focuses on factors that are most relevant to prevention efforts. (For reviews of the risk factor literature, see, for example, Hawkins et al., 1998; Lipsey and Derzon, 1998; Rutter et al., 1998.) The chapter discusses risk factors for offending, beginning with risks at the individual level, including biological, psychological, behavioral, and cognitive factors. Social-level risk factors are discussed next; these include family and peer relationships. Finally, community-level risk factors, including school and neighborhood attributes, are examined. Although individual, social, and community-level factors interact, each level is discussed separately for clarity.


A large number of individual factors and characteristics have been associated with the development of juvenile delinquency. These individual factors include age, gender, complications during pregnancy and delivery, impulsivity, aggressiveness, and substance use. Some factors operate before birth (prenatal) or close to, during, and shortly after birth (perinatal); some can be identified in early childhood; and other factors may not be evident until late childhood or during adolescence. To fully appreciate the development of these individual characteristics and their relations to delinquency, one needs to study the development of the individual in interaction with the environment. In order to simplify presentation of the research, however, this section deals only with individual factors.


Studies of criminal activity by age consistently find that rates of offending begin to rise in preadolescence or early adolescence, reach a peak in late adolescence, and fall through young adulthood (see, e.g., Farrington, 1986a; National Research Council, 1986). Some lawbreaking experience at some time during adolescence is nearly universal among American children, although much of this behavior is reasonably mild and temporary. Although the exact ages of onset, peak, and desistance vary by offense, the general pattern has been remarkably consistent over time, in different countries, and for official and self-reported data. For example, Farrington (1983, 1986a), in a longitudinal study of a sample of boys in London (the Cambridge Longitudinal Study), found an eightfold increase in the number of different boys convicted of delinquent behavior from age 10 to age 17, followed by a decrease to a quarter of the maximum level by age 24. The number of self-reported offenses in the same sample also peaked between ages 15 and 18, then dropped sharply by age 24. In a longitudinal study of boys in inner-city Pittsburgh (just over half the sample was black and just under half was white), the percentage of boys who self-reported serious delinquent behavior rose from 5 percent at age 6 to about 18 percent for whites and 27 percent for blacks at age 16 (Loeber et al., 1998). A longitudinal study of a representative sample from high-risk neighborhoods in Denver also found a growth in the self-reported prevalence of serious violence from age 10 through late adolescence (Kelley et al., 1997). Females in the Denver sample exhibited a peak in serious violence in midadolescence, but prevalence continued to increase through age 19 for the boys. The study is continuing to follow these boys to see if their prevalence drops in early adulthood. Laub et al. (1998), using the Gluecks' data on 500 juvenile offenders from the 1940s, found that only 25 percent of them were still offending by age 32.

Much research has concentrated on the onset of delinquency, examining risk factors for onset and differences between those who begin offending early (prior to adolescence) and those who begin offending in midadolescence. There have been suggestions that early-onset delinquents are more likely than later-onset delinquents to become serious and persistent offenders (e.g., Moffitt, 1993). There is evidence, however, that predictors associated with onset do not predict persistence particularly well (Farrington and Hawkins, 1991). There are also important problems with the choice of statistical models to create categories of developmental trajectories (Nagin and Tremblay, 1999).

Research by Nagin and Tremblay (1999) found no evidence of late-onset physical aggression. Physical aggression was highest at age 6 (the earliest age for which data were collected for this study) and declined into adolescence. The available data on very young children indicate that the frequency of physical aggression reaches a peak around age 2 and then slowly declines up to adolescence (Restoin et al., 1985; Tremblay et al., 1996a).

Those who persist in offending into adulthood may differ from those who desist in a number of ways, including attachment to school, military service (Elder, 1986; Sampson and Laub, 1996), sex, age of onset of offending, incarceration, and adult social bonds (e.g., marriage, quality of marriage, job stability) (Farrington and West, 1995; Quinton et al., 1993; Quinton and Rutter, 1988; Sampson and Laub, 1990). Sampson and Laub (1993) found that marital attachment and job stability significantly reduced deviant behavior in adulthood. Farrington and West (1995) found that offenders and nonoffenders were equally likely to get married, but those who got married and lived with their spouse decreased their offending more than those who remained single or who did not live with their spouse. They also found that offending increased after separation from a spouse. Similarly, Horney et al. (1995) found that married male offenders decreased their offending when living with their spouses and resumed it when not living with them. Within marriages, only good marriages predicted reduction in crime, and these had an increasing effect over time (Laub et al., 1998). Warr (1998) also found that offending decreased after marriage but attributed the decrease to a reduction in the time spent with peers and in the number of deviant peers following marriage, rather than to increased attachment to conventional society through marriage.

Laub et al. (1998) found no difference between persisters and desisters in most family characteristics during childhood (e.g., poverty, parental alcohol abuse or crime, discipline, supervision) or in most individual differences in childhood (e.g., aggression, tantrums, difficult child, verbal IQ). Brannigan (1997) points out that crime is highest when males have the fewest resources, and it lasts longest in those with the fewest investments in society (job, wife, children). Crime is not an effective strategy for getting resources. There is evidence that chronic offenders gain fewer resources than nonoffenders after the adolescent period (Moffitt, 1993).

The evidence for desistance in girls is not clear. One review of the literature suggests that 25 to 50 percent of antisocial girls commit crimes as adults (Pajer, 1998). There is also some evidence that women are less likely to be recidivists and that they end their criminal careers earlier than men (Kelley et al., 1997). However, the sexes appear to become more similar with time in rates of all but violent crimes. There is a suggestion that women who persist in crime past adolescence may be more disturbed than men who persist (Jordan et al., 1996; Pajer, 1998).

Prenatal and Perinatal Factors

Several studies have found an association between prenatal and perinatal complications and later delinquent or criminal behavior (Kandel et al., 1989; Kandel and Mednick, 1991; Raine et al., 1994). Prenatal and perinatal risk factors represent a host of latent and manifest conditions that influence subsequent development.

Many studies use the term "prenatal or perinatal complications" to describe what is a very heterogeneous set of latent and clinical conditions. Under the heading of prenatal factors, one finds a broad variety of conditions that occur before birth, through the seventh month of gestation (Kopp and Krakow, 1983). Similarly, perinatal factors include conditions as varied as apnea of prematurity (poor breathing) and severe respiratory distress syndrome. The former condition is relatively benign, while the latter is often life-threatening. Although they are risk factors, low birthweight and premature birth do not necessarily presage problems in development.

Prenatal and perinatal risk factors may compromise the nervous system, creating vulnerabilities in the child that can lead to abnormal behavior. Children with prenatal and perinatal complications who live in impoverished, deviant, or abusive environments face added difficulties. According to three major large-scale, long-term studies: (1) developmental risks have additive negative effects on child outcomes, (2) most infants with perinatal complications develop into normally functioning children, and (3) children with long-term negative outcomes who suffered perinatal complications more often than not came from socially disadvantaged backgrounds (Brennan and Mednick, 1997; Broman et al., 1975; Drillien et al., 1980; Werner et al., 1971).

Mednick and colleagues (Brennan and Mednick, 1997; Kandel and Mednick, 1991; Raine et al., 1994) have conducted several investigations in an attempt to elucidate the relationship between criminal behavior and perinatal risk. These and other studies have been unable to identify specific mechanisms to account for the fact that the number of prenatal and perinatal abnormalities tends to correlate with the probability that a child will become a criminal. In addition to the lack of specificity regarding the predictors and the mechanisms of risk, similar measures predict learning disabilities, mental retardation, minimal brain dysfunction, and other conditions (Towbin, 1978). An association between perinatal risk factors and violent offending is particularly strong among offenders whose parents are mentally ill or very poor (Raine et al., 1994, 1997).

Most measures indicate that males are more likely to commit crimes. They are also more vulnerable to prenatal and perinatal stress, as is shown through studies of negative outcomes, including death (Davis and Emory, 1995; Emory et al., 1996).

Hyperactivity, attention problems, and impulsiveness in children have been found to be associated with delinquency. These behaviors can be assessed very early in life and are associated with certain prenatal and perinatal histories (DiPietro et al., 1996; Emory and Noonan, 1984; Lester et al., 1976; Sameroff and Chandler, 1975). For example, exposure to environmental toxins, such as prenatal lead exposure at very low levels, tends to adversely affect neonatal motor and attentional performance (Emory et al., 1999). Hyperactivity and aggression are associated with prenatal alcohol exposure (Brown et al., 1991; Institute of Medicine, 1996). Prenatal exposure to alcohol, cocaine, heroin, and nicotine appears to have similar effects: each tends to be associated with hyperactivity, attention deficit, and impulsiveness (Karr-Morse and Wiley, 1997).

Individual Capabilities, Competencies, and Characteristics

Recent investigations have found that behaviors particularly relevant to later misbehavior, such as duration of attention to a toy and compliance with a mother's instructions not to touch an object, are observable in the first year of life (Kochanska et al., 1998). However, how well such traits early in life predict behavior at later ages (in adolescence and adulthood) is not yet known. Aggressive behavior is nevertheless one of the more stable dimensions, and significant stability may be seen from toddlerhood to adulthood (Tremblay, 2000).

The social behaviors that developmentalists study during childhood can be divided into two broad categories: prosocial and antisocial. Prosocial behaviors include helping, sharing, and cooperation, while antisocial behaviors include different forms of oppositional and aggressive behavior. The development of empathy, guilt feelings, social cognition, and moral reasoning are generally considered important emotional and cognitive correlates of social development.

Impulsivity and hyperactivity have both been associated with later antisocial behavior (Rutter et al., 1998). The social behavior characteristics that best predict delinquent behavior, however, are physical aggression and oppositionality (Lahey et al., 1999; Nagin and Tremblay, 1999). Most children start manifesting these behaviors between the end of the first and second years. The peak level in frequency of physical aggression is generally reached between 24 and 36 months, an age at which the consequences of the aggression are generally relatively minor (Goodenough, 1931; Sand, 1966; Tremblay et al., 1996a, 1999a). By entry into kindergarten, the majority of children have learned to use means other than physical aggression to get what they want and to solve conflicts. Those who have not, who are oppositional and show few prosocial behaviors toward peers, are at high risk of being rejected by their peers, of failing in school, and eventually of getting involved in serious delinquency (Farrington and Wikstrom, 1994; Huesmann et al., 1984; Miller and Eisenberg, 1988; Nagin and Tremblay, 1999; Tremblay et al., 1992a, 1994; White et al., 1990).

The differentiation of emotions and emotional regulation occurs during the 2-year period from 12 months to 36 months, when the frequency of physical aggression increases sharply and then decreases almost as sharply (Tremblay, 2000; Tremblay et al., 1996a, 1999a). A number of longitudinal studies have shown that children who are behaviorally inhibited (shy, anxious) are less at risk of juvenile delinquency, while children who tend to be fearless, those who are impulsive, and those who have difficulty delaying gratification are more at risk of delinquent behavior (Blumstein et al., 1984; Ensminger et al., 1983; Kerr et al., 1997; Mischel et al., 1989; Tremblay et al., 1994).

A large number of studies report that delinquents have a lower verbal IQ compared with nondelinquents, as well as lower school achievement (Fergusson and Horwood, 1995; Maguin and Loeber, 1996; Moffitt, 1997). Antisocial youth also tend to show cognitive deficits in the areas of executive functions 1 (Moffitt et al., 1994; Seguin et al., 1995), perception of social cues, and problem-solving processing patterns (Dodge et al., 1997; Huesmann, 1988). The association between cognitive deficits and delinquency remains after controlling for social class and race (Moffitt, 1990; Lynam et al., 1993). Few studies, however, have assessed cognitive functioning during the preschool years or followed the children into adolescence to understand the long-term link between early cognitive deficits and juvenile delinquency. The studies that did look at children's early cognitive development have shown that poor language performance by the second year after birth, poor fine motor skills by the third year, and low IQ by kindergarten were all associated with later antisocial behavior (Kopp and Krakow, 1983; Stattin and Klackenberg-Larsson, 1993; White et al., 1990). Stattin and Klackenberg-Larsson (1993) found that the association between poor early language performance and later criminal behavior remained significant even after controlling for socioeconomic status.

Epidemiological studies have found a correlation between language delay and aggressive behavior (Richman et al., 1982). Language delays may contribute to poor peer relations that, in turn, result in aggression (Campbell, 1990a). The long-term impact of cognitively oriented preschool programs on the reduction of antisocial behavior is a more direct indication that fostering early cognitive development can play an important role in the prevention of juvenile delinquency (Schweinhart et al., 1993; Schweinhart and Weikart, 1997). It is important to note that since poor cognitive abilities and problem behaviors in the preschool years also

1 Executive functions refer to a variety of independent skills that are necessary for purposeful, goal-directed activity. Executive functions require generating and maintaining appropriate mental representations, monitoring the flow of information, and modifying problem-solving strategies in order to keep behavior directed toward the goal.

lead to poor school performance, they probably explain a large part of the association observed during adolescence between school failure and delinquency (Fergusson and Horwood, 1995; Maguin and Loeber, 1996; Tremblay et al., 1992).

Several mental health disorders of childhood have been found to put children at risk for future delinquent behavior. Conduct disorder is often diagnosed when a child is troublesome and breaks rules or norms but does not necessarily engage in illegal behavior, especially at younger ages. This behavior may include lying, bullying, cruelty to animals, fighting, and truancy. Most adolescents in U.S. society at some time engage in illegal behaviors, whether some kind of theft, aggression, or status offense. Many adolescents, in the period during which they engage in these behaviors, are likely to meet formal criteria for conduct disorder. Behavior characterized by willful disobedience and defiance is considered a different disorder (oppositional defiant disorder), but it often occurs in conjunction with conduct disorder and may precede it.

Several prospective longitudinal studies have found that children with attention and hyperactivity problems, such as attention deficit hyperactivity disorder, show high levels of antisocial and aggressive behavior (Campbell, 1990b; Hechtman et al., 1984; Loney et al., 1982; Sanson et al., 1993; Satterfield et al., 1982). Early hyperactivity and attention problems without concurrent aggression, however, appear not to be related to later aggressive behavior (Loeber, 1988; Magnusson and Bergman, 1990; Nagin and Tremblay, 1999), although a few studies do report such relationships (Gittelman et al., 1985; Mannuzza et al., 1991, 1993).

Another disorder that is often associated with antisocial behavior and conduct disorder is major depressive disorder, particularly in girls (Kovacs, 1996; Offord et al., 1986; Renouf and Harter, 1990). It is hypothesized that depression during adolescence may be "a central pathway through which girls' serious antisocial behavior develops" (Obeidallah and Earls, 1999:1). In girls, conduct disorder may be a manifestation of the hopelessness, frustration, and low self-esteem that often characterize major depression.

For juveniles as well as adults, the use of drugs and alcohol is common among offenders. In 1998, about half of juvenile arrestees in the Arrestee Drug Abuse Monitoring Program tested positive for at least one drug. In these same cities, 2 about two-thirds of adult arrestees tested

2 This program collects information on both juvenile and adult arrestees in Birmingham, Alabama; Cleveland, Ohio; Denver, Colorado; Indianapolis, Indiana; Los Angeles, California; Phoenix, Arizona; Portland, Oregon; St. Louis, Missouri; San Antonio, Texas; San Diego, California; San Jose, California; Tucson, Arizona; and Washington, DC. Data on adults are collected in 35 cities altogether.

positive for at least one drug (National Institute of Justice, 1999). Of course, drug use is a criminal offense on its own, and for juveniles, alcohol use is also a status delinquent offense. A number of studies have consistently found that as the seriousness of offending goes up, so does the seriousness of drug use as measured both by frequency of use and type of drug (see Huizinga and Jakob-Chien, 1998). In the longitudinal studies of causes and correlates of delinquency in Denver, Pittsburgh, and Rochester (see Thornberry et al., 1995), serious offenders had a higher prevalence of drug and alcohol use than did minor offenders or nonoffenders. In addition, about three-quarters of drug users in each sample were also involved in serious delinquency (Huizinga and Jakob-Chien, 1998). Similarly, in the Denver Youth Survey, serious offenders had the highest prevalence and frequency of use of alcohol and marijuana of all youth in the study. Nevertheless, only about one-third of serious delinquents were problem drug users (Huizinga and Jakob-Chien, 1998).
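The overlap figures above can look contradictory at first: about three-quarters of drug users were serious delinquents, yet only about one-third of serious delinquents were problem drug users. The asymmetry simply reflects the different sizes of the two groups. A minimal sketch with hypothetical counts (illustrative only, not taken from the Denver, Pittsburgh, or Rochester studies) shows how both conditional proportions can hold at once:

```python
# Hypothetical counts for a sample of 1,000 youth (illustrative only;
# not data from the studies cited in the text).
serious_delinquents = 300   # youth classified as serious offenders
problem_drug_users = 133    # youth classified as problem drug users
overlap = 100               # youth falling into both groups

# Share of drug users who are also serious delinquents: about 3/4
p_delinquent_given_user = overlap / problem_drug_users
print(round(p_delinquent_given_user, 2))   # 0.75

# Share of serious delinquents who are also problem drug users: about 1/3
p_user_given_delinquent = overlap / serious_delinquents
print(round(p_user_given_delinquent, 2))   # 0.33
```

Because the same overlap is divided by two different base groups, both statements are consistent, and no causal direction follows from either proportion.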

Although there appears to be a relationship between alcohol and drug use and delinquency, not all delinquents use alcohol or drugs, nor do all alcohol and drug users commit delinquent acts (other than the alcohol or drug use itself). Those who are both serious delinquents and serious drug users may be involved in a great deal of crime, however. Johnson et al. (1991) found that the small group (less than 5 percent of a national sample) who were both serious delinquents and serious drug users accounted for over half of all serious crimes. Nevertheless, it would be premature to conclude that serious drug use causes serious crime (McCord, 2001).

Whatever characteristics individuals have, resulting personalities and behavior are influenced by the social environments in which they are raised. Characteristics of individuals always develop in social contexts.


Children's and adolescents' interactions and relationships with family and peers influence the development of antisocial behavior and delinquency. Family interactions are most important during early childhood, but they can have long-lasting effects. In early adolescence, relationships with peers take on greater importance. This section will first consider factors within the family that have been found to be associated with the development of delinquency and then consider peer influences on delinquent behavior. Note that issues concerning poverty and race are dealt with under the community factors section of this chapter. Chapter 7 deals specifically with issues concerning race.

Family Influences

In assigning responsibility for childrearing to parents, most Western cultures place a heavy charge on families. Such cultures assign parents the task of raising children to follow society's rules for acceptable behavior. It should be no surprise, therefore, when families have difficulties with the task laid on them, that the product often is juvenile delinquency (Kazdin, 1997). Family structure (who lives in a household) and family functioning (how the family members treat one another) are two general categories under which family effects on delinquency have been examined.

Family Structure

Before embarking on a review of the effects of family structure, it is important to raise the question of mechanisms (Rutter et al., 1998). It may not be the family structure itself that increases the risk of delinquency, but rather some other factor that explains why that structure is present. Alternatively, a certain family structure may increase the risk of delinquency, but only as one more stressor in a series; it may be the number rather than the specific nature of the stressors that is harmful.

Historically, one aspect of family structure that has received a great deal of attention as a risk factor for delinquency is growing up in a family that has experienced separation or divorce. 3 Although many studies have found an association between broken homes and delinquency (Farrington and Loeber, 1999; Rutter and Giller, 1983; Wells and Rankin, 1991; Wilson and Herrnstein, 1985), there is considerable debate about the meaning of the association. For example, longitudinal studies have found an increased level of conduct disorder and behavioral disturbance in children of divorcing parents before the divorce took place (Block et al., 1986; Cherlin et al., 1991). Capaldi and Patterson (1991) showed that disruptive parenting practices and antisocial personality of the parent(s) accounted for apparent effects of divorce and remarriage. Thus, it is likely that the increased risk of delinquency experienced among children of broken homes is related to the family conflict prior to the divorce or separation, rather than to the family breakup itself (Rutter et al., 1998). In their longitudinal study of family disruption, Juby and Farrington (2001) found that boys who stayed with their mothers following disruption had delinquency rates that were almost identical to those of boys reared in intact families.

3 Many discussions of family structure treat single-parent households and divorced families as the same. In this section, the literature on single parents is reported separately from that on separated and divorced families because there may be considerable differences in the experiences of children born to single parents and those whose parents divorce.

Being born and raised in a single-parent family has also been associated with increased risk of delinquency and antisocial behavior. Research that takes into account the socioeconomic conditions of single-parent households and other risks, including disciplinary styles and problems in supervising and monitoring children, shows that these other factors account for the differential outcomes in these families. The important role of socioeconomic conditions is shown by the absence of differences in delinquency between children in single-parent and two-parent homes within homogeneous socioeconomic classes (Austin, 1978). Careful analyses of juvenile court cases in the United States show that economic conditions rather than family composition influenced children's delinquency (Chilton and Markle, 1972). Statistical controls for the mothers' age and poverty have been found to remove effects attributed to single-parent families (Crockett et al., 1993). Furthermore, the significance of being born to a single mother has changed dramatically over the past 30 years. In 1970, 10.7 percent of all births in the United States were to unmarried women (U.S. Census Bureau, 1977). By 1997, births to unmarried women accounted for 32.4 percent of U.S. births (U.S. Census Bureau, 1999). As Rutter and colleagues (1998:185) noted about similar statistics in the United Kingdom: "It cannot be assumed that the risks for antisocial behavior (from being born to a single parent) evident in studies of children born several decades ago will apply to the present generation of births." Recent work seems to bear out this conclusion. Gorman-Smith and colleagues found no association between single parenthood and delinquency in a poor, urban U.S. community (Gorman-Smith et al., 1999).

Nevertheless, children in single-parent families are more likely to be exposed to other criminogenic influences, such as frequent changes in the resident father figure (Johnson, 1987; Stern et al., 1984). Single parents often find it hard to get assistance (Ensminger et al., 1983; Spicer and Hampe, 1975). If they must work to support themselves and their families, they are likely to have difficulty providing supervision for their children. Poor supervision is associated with the development of delinquency (Dornbusch et al., 1985; Glueck and Glueck, 1950; Hirschi, 1969; Jensen, 1972; Maccoby, 1958; McCord, 1979, 1982). Summarizing their work on race, family structure, and delinquency in white and black families, Matsueda and Heimer (1987:836) noted: "Yet in both racial groups non-intact homes influence delinquency through a similar process—by attenuating parental supervision, which in turn increases delinquent companions, prodelinquent definitions, and, ultimately, delinquent behavior." It looks as if the effects of living with a single parent vary with the amount of supervision, as well as the emotional and economic resources that the parent is able to bring to the situation.

A number of studies have found that children born to teenage mothers are more likely to be not only delinquent, but also chronic juvenile offenders (Farrington and Loeber, 1999; Furstenberg et al., 1987; Kolvin et al., 1990; Maynard, 1997; Nagin et al., 1997). An analysis of children born in 1974 and 1975 in Washington state found that being born to a mother under age 18 tripled the risk of becoming a chronic offender. Males born to unmarried mothers under age 18 were 11 times more likely to become chronic juvenile offenders than were males born to married mothers over the age of 20 (Conseur et al., 1997).
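The "11 times more likely" figure is a relative risk: the rate of chronic offending in one group divided by the rate in a comparison group. A minimal sketch with assumed base rates (hypothetical values, not the actual Washington state rates reported by Conseur et al.) shows the computation:

```python
# Hypothetical base rates of chronic juvenile offending (assumed for
# illustration; not the actual rates from Conseur et al., 1997).
rate_unmarried_under_18 = 0.055  # males born to unmarried mothers under 18
rate_married_over_20 = 0.005     # males born to married mothers over 20

relative_risk = rate_unmarried_under_18 / rate_married_over_20
print(round(relative_risk, 1))   # 11.0
```

Note that a large relative risk can still correspond to a small absolute risk when the outcome is rare in both groups.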

What accounts for the increase in risk from having a young mother? Characteristics of women who become teenage parents appear to account for some of the risk. Longitudinal studies in both Britain and the United States have found that girls who exhibit antisocial behavior are at increased risk of teenage motherhood, of having impulsive liaisons with antisocial men, and of having parenting difficulties (Maughan and Lindelow, 1997; Quinton et al., 1993; Quinton and Rutter, 1988). In Grogger's analysis of data from the National Longitudinal Survey of Youth, both within-family comparisons and multivariate analysis showed that the characteristics and backgrounds of the women who became teenage mothers accounted for a large part of the risk of their offspring's delinquency (Grogger, 1997), but the age at which the mother gave birth also contributed to the risk. A teenager who becomes pregnant is also more likely than older mothers to be poor, to be on welfare, to have curtailed her education, and to deliver a baby with low birthweight. Separately or together, these correlates of teenage parenthood have been found to increase the risk for delinquency (Rutter et al., 1998). Nagin et al. (1997), in an analysis of data from the Cambridge Study in Delinquent Development, found that the risk of criminality was increased for children in large families born to women who began childbearing as teenagers. They concluded that "the onset of early childbearing is not a cause of children's subsequent problem behavior, but rather is a marker for a set of behaviors and social forces that give rise to adverse consequences for the life chances of children" (Nagin et al., 1997:423).

Children raised in families of four or more children have an increased risk of delinquency (Farrington and Loeber, 1999; Rutter and Giller, 1983). It has been suggested that large family size is associated with less adequate discipline and supervision of children, and that it is the parenting difficulties that account for much of the association with delinquency (Farrington and Loeber, 1999). Work by Offord (1982) points to the influence of delinquent siblings rather than to parenting qualities. Rowe and Farrington (1997), in an analysis of a London longitudinal study, found that there was a tendency for antisocial individuals to have large families. The effect of family size on delinquency was reduced when parents' criminality was taken into account.

Family Interaction

Even in intact, two-parent families, children may not receive the supervision, training, and advocacy needed to ensure a positive developmental course. A number of studies have found that poor parental management and disciplinary practices are associated with the development of delinquent behavior. Failure to set clear expectations for children's behavior, inconsistent discipline, excessively severe or aggressive discipline, and poor monitoring and supervision of children predict later delinquency (Capaldi and Patterson, 1996; Farrington, 1989; Hawkins et al., 1995b; McCord, 1979). As Patterson (1976, 1995) indicates through his research, parents who nag or use idle threats are likely to generate coercive systems in which children gain control through misbehaving. Several longitudinal studies investigating the effects of punishment on aggressive behavior have shown that physical punishments are more likely to result in defiance than compliance (McCord, 1997b; Power and Chapieski, 1986; Strassberg et al., 1994). Perhaps the best grounds for believing that family interaction influences delinquency are programs that alter parental management techniques and thereby benefit siblings as well as reduce delinquent behavior by the child whose conduct brought the parents into the program (Arnold et al., 1975; Kazdin, 1997; Klein et al., 1977; Tremblay et al., 1995).

Consistent discipline, supervision, and affection help to create well-socialized adolescents (Austin, 1978; Bender, 1947; Bowlby, 1940; Glueck and Glueck, 1950; Goldfarb, 1945; Hirschi, 1969; Laub and Sampson, 1988; McCord, 1991; Sampson and Laub, 1993). Furthermore, reductions in delinquency between the ages of 15 and 17 years appear to be related to friendly interaction between teenagers and their parents, a situation that seems to promote school attachment and stronger family ties (Liska and Reed, 1985). In contrast, children who have suffered parental neglect have an increased risk of delinquency. Widom (1989) and McCord (1983) both found that children who had been neglected were as likely as those who had been physically abused to commit violent crimes later in life. In their review of many studies investigating relationships between socialization in families and juvenile delinquency, Loeber and Stouthamer-Loeber (1986) concluded that parental neglect had the largest impact.

Child abuse, as well as neglect, has been implicated in the development of delinquent behavior. In three quite different prospective studies from different parts of the country, childhood abuse and neglect have been found to increase a child's risk of delinquency (Maxfield and Widom, 1996; Smith and Thornberry, 1995; Widom, 1989; Zingraff et al., 1993). These studies examined children of different ages, cases of childhood abuse and neglect from different time periods, different definitions of child abuse and neglect, and both official and self-reports of offending, but came to the same conclusions. The findings are true for girls as well as boys, and for black as well as for white children. In addition, abused and neglected children start offending earlier than children who are not abused or neglected, and they are more likely to become chronic offenders (Maxfield and Widom, 1996). Victims of childhood abuse and neglect are also at higher risk than other children of being arrested for a violent crime as a juvenile (Maxfield and Widom, 1996).

There are problems in carrying out scientific investigations of each of these components as predictors of juvenile delinquency. First, these behaviors are not empirically independent of one another. Parents who do not watch their young children consistently are less likely to prevent destructive or other unwanted behaviors and therefore more likely to punish. Parents who are themselves unclear about what they expect of their children are likely to be inconsistent and to be unclear in communications with their children. Parenting that involves few positive shared parent-child activities will often also involve less monitoring and more punishing. Parents who reject their children or who express hostility toward them are more likely to punish them. Parents who punish are more likely to punish too much (abuse).

Another problem is the lack of specificity of effects of problems in childrearing practices. In general, problems in each of these areas are likely to be associated with problems of a variety of types—performance and behavior in school, with peers, with authorities, and eventually with partners and offspring. There are also some children who appear to elicit punishing behavior from parents, and this may predate such parenting. Therefore, it is necessary to take account of children's behavior as a potential confounder of the relationship between early parenting and later child problems, because harsh parenting may be a response to a particular child's behavior (Tremblay, 1995). It is also possible that unnecessarily harsh punishment is more frequently and intensely used by parents who are themselves more aggressive and antisocial. Children of antisocial parents are at heightened risk for aggressive, antisocial, and delinquent behavior (e.g., McCord, 1991; Serbin et al., 1998).

Social Setting

Where a family lives affects the nature of opportunities that will be available to its members. In some communities, public transportation permits easy travel for those who do not own automobiles. Opportunities for employment and entertainment extend beyond the local boundaries. In other communities, street-corner gatherings open possibilities for illegal activities. Lack of socially acceptable opportunities leads to frustra-