Shortlisted for the British Psychological Society Book Award 2017
Shortlisted for the British Book Design and Production Awards 2016
Shortlisted for the Association of Learned & Professional Society Publishers Award for Innovation in Publishing 2016

An Adventure in Statistics: The Reality Enigma by best-selling author and award-winning teacher Andy Field offers a better way to learn statistics. It combines rock-solid statistics coverage with compelling visual storytelling to address the conceptual difficulties that students learning statistics for the first time often encounter in introductory courses, guiding them away from rote memorization and toward critical thinking and problem solving. Field masterfully weaves in a unique, action-packed story starring Zach, a character who thinks like a student, processing information and the challenges of understanding it in the same way a statistics novice would. Illustrated with stunning graphic novel-style art and featuring Socratic dialogue, the story captivates readers as it introduces them to concepts, easing potential statistics anxiety.

The book assumes no previous statistics knowledge and does not require the use of data analysis software. It covers the material you would expect of an introductory statistics course, but with a contemporary twist, laying down strong foundations for understanding both classical and Bayesian approaches to data analysis, approaches that Field's other books (Discovering Statistics Using IBM SPSS Statistics and Discovering Statistics Using R) only touch on. In doing so, it provides an unrivalled launch pad to further study, research and inquisitiveness about the real world, equipping students with skills they can use to succeed in their chosen degree and go on to apply in the workplace.

The Story and Main Characters

The Reality Revolution
In the City of Elpis, in the year 2100, there has been a reality revolution. Before the revolution, Elpis citizens were unable to see their flaws and limitations, believing themselves talented and special. This led to a self-absorbed society in which hard work and the collective good were undervalued and eroded. To combat this, Professor Milton Grey invented the reality prism, a hat that allowed its wearers to see themselves as they really were, flaws and all. Faced with the truth, Elpis citizens revolted, destroying and banning all reality prisms.

The Mysterious Disappearance
Zach and Alice are born soon after all the prisms have been destroyed. Zach, a musician who doesn't understand science, and Alice, a geneticist who is also a whiz at statistics, are in love. One night, after making a world-changing discovery, Alice suddenly disappears, leaving behind a song playing on a loop and a file containing her research.

Statistics to the Rescue!
Sensing that she might be in danger, Zach follows the clues to find her, realizing that the key to discovering why Alice has vanished lies in her research. Alas! He must learn statistics and apply what he learns in order to overcome a number of deadly challenges and find the love of his life. As Zach and his pocket watch, The Head, embark on their quest to find Alice, they meet Professor Milton Grey and Celia, battle zombies, cross a probability bridge, and encounter Jig:Saw, a mysterious corporation that might have something to do with Alice's disappearance…
Once again, bestselling author and award-winning teacher Andy Field hasn't just broken the traditional textbook mould with his new novel/textbook; he has forged the only statistics book on the market with a terrifying probability bridge, zombies and a talking cat!
Prologue: The Dying Stars

1 Why You Need Science: The Beginning and The End
1.1. Will you love me now? 1.2. How science works 1.2.1. The research process 1.2.2. Science as a life skill 1.3. Research methods 1.3.1. Correlational research methods 1.3.2. Experimental research methods 1.3.3. Practice, order and randomization 1.4. Why we need science

2 Reporting Research, Variables and Measurement: Breaking the Law
2.1. Writing up research 2.2. Maths and statistical notation 2.3. Variables and measurement 2.3.1. The conspiracy unfolds 2.3.2. Qualitative and quantitative data 2.3.3. Levels of measurement 2.3.4. Measurement error 2.3.5. Validity and reliability

3 Summarizing Data: She Loves Me Not?
3.1. Frequency distributions 3.1.1. Tabulated frequency distributions 3.1.2. Grouped frequency distributions 3.1.3. Graphical frequency distributions 3.1.4. Idealized distributions 3.1.5. Histograms for nominal and ordinal data 3.2. Throwing Shapes

4 Fitting Models (Central Tendency): Somewhere In The Middle
4.1. Statistical Models 4.1.1. From the dead 4.1.2. Why do we need statistical models? 4.1.3. Sample size 4.1.4. The one and only statistical model 4.2. Central Tendency 4.2.1. The mode 4.2.2. The median 4.2.3. The mean 4.3. The 'fit' of the mean: variance 4.3.1. The fit of the mean 4.3.2. Estimating the fit of the mean from a sample 4.3.3. Outliers and variance 4.4. Dispersion 4.4.1. The standard deviation as an indication of dispersion 4.4.2. The range and interquartile range

5 Presenting Data: Aggressive Perfector
5.1. Types of graphs 5.2. Another perfect day 5.3. The art of presenting data 5.3.1. What makes a good graph? 5.3.2. Bar graphs 5.3.3. Line graphs 5.3.4. Boxplots (box-whisker diagrams) 5.3.5. Graphing relationships: the scatterplot 5.3.6. Pie charts

6 Z-Scores: The Wolf is Loose
6.1. Interpreting raw scores 6.2. Standardizing a score 6.3. Using z-scores to compare distributions 6.4. Using z-scores to compare scores 6.5. Z-scores for samples

7 Probability: The Bridge of Death
7.1. Probability 7.1.1. Classical probability 7.1.2. Empirical probability 7.2. Probability and frequency distributions 7.2.1. The discs of death 7.2.2. Probability density functions 7.2.3. Probability and the normal distribution 7.2.4. The probability of a score greater than x 7.2.5. The probability of a score less than x: The tunnels of death 7.2.6. The probability of a score between two values: The catapults of death 7.3. Conditional probability: Deathscotch

8 Inferential Statistics: Going Beyond the Data
8.1. Estimating parameters 8.2. How well does a sample represent the population? 8.2.1. Sampling distributions 8.2.2. The standard error 8.2.3. The central limit theorem 8.3. Confidence Intervals 8.3.1. Calculating confidence intervals 8.3.2. Calculating other confidence intervals 8.3.3. Confidence intervals in small samples 8.4. Inferential statistics

9 Robust Estimation: Man Without Faith or Trust
9.1. Sources of bias 9.1.1. Extreme scores and non-normal distributions 9.1.2. The mixed normal distribution 9.2. A great mistake 9.3. Reducing bias 9.3.1. Transforming data 9.3.2. Trimming data 9.3.3. M-estimators 9.3.4. Winsorizing 9.3.5. The bootstrap 9.4. A final point about extreme scores

10 Hypothesis Testing: In Reality All is Void
10.1. Null hypothesis significance testing 10.1.1. Types of hypothesis 10.1.2. Fisher's p-value 10.1.3. The principles of NHST 10.1.4. Test statistics 10.1.5. One- and two-tailed tests 10.1.6. Type I and Type II errors 10.1.7. Inflated error rates 10.1.8. Statistical power 10.1.9. Confidence intervals and statistical significance 10.1.10. Sample size and statistical significance

11 Modern Approaches to Theory Testing: A Careworn Heart
11.1. Problems with NHST 11.1.1. What can you conclude from a 'significance' test? 11.1.2. All-or-nothing thinking 11.1.3. NHST is influenced by the intentions of the scientist 11.2. Effect sizes 11.2.1. Cohen's d 11.2.2. Pearson's correlation coefficient, r 11.2.3. The odds ratio 11.3. Meta-analysis 11.4. Bayesian approaches 11.4.1. Asking a different question 11.4.2. Bayes' theorem revisited 11.4.3. Comparing hypotheses 11.4.4. Benefits of Bayesian approaches

12 Assumptions: Starblind
12.1. Fitting models: bringing it all together 12.2. Assumptions 12.2.1. Additivity and linearity 12.2.2. Independent errors 12.2.3. Homoscedasticity/homogeneity of variance 12.2.4. Normally distributed something or other 12.2.5. External variables 12.2.6. Variable types 12.2.7. Multicollinearity 12.2.8. Non-zero variance 12.3. Turning ever towards the sun

13 Relationships: A Stranger's Grave
13.1. Finding relationships in categorical data 13.1.1. Pearson's chi-square test 13.1.2. Assumptions 13.1.3. Fisher's exact test 13.1.4. Yates's correction 13.1.5. The likelihood ratio (G-test) 13.1.6. Standardized residuals 13.1.7. Calculating an effect size 13.1.8. Using a computer 13.1.9. Bayes factors for contingency tables 13.1.10. Summary 13.2. What evil lay dormant 13.3. Modelling relationships 13.3.1. Covariance 13.3.2. Pearson's correlation coefficient 13.3.3. The significance of the correlation coefficient 13.3.4. Confidence intervals for r 13.3.5. Using a computer 13.3.6. Robust estimation of the correlation 13.3.7. Bayesian approaches to relationships between two variables 13.3.8. Correlation and causation 13.3.9. Calculating the effect size 13.4. Silent sorrow in empty boats

14 The General Linear Model: Red Fire Coming Out From His Gills
14.1. The linear model with one predictor 14.1.1. Estimating parameters 14.1.2. Interpreting regression coefficients 14.1.3. Standardized regression coefficients 14.1.4. The standard error of b 14.1.5. Confidence intervals for b 14.1.6. Test statistic for b 14.1.7. Assessing the goodness of fit 14.1.8. Fitting a linear model using a computer 14.1.9. When this fails 14.2. Bias in the linear model 14.3. A general procedure for fitting linear models 14.4. Models with several predictors 14.4.1. The expanded linear model 14.4.2. Methods for entering predictors 14.4.3. Estimating parameters 14.4.4. Using a computer to build more complex models 14.5. Robust regression 14.5.1. Bayes factors for linear models

15 Comparing Two Means: Rock or Bust
15.1. Testing differences between means: The rationale 15.2. Means and the linear model 15.2.1. Estimating the model parameters 15.2.2. How the model works 15.2.3. Testing the model parameters 15.2.4. The independent t-test on a computer 15.2.5. Assumptions of the model 15.3. Everything you believe is wrong 15.4. The paired-samples t-test 15.4.1. The paired-samples t-test on a computer 15.5. Alternative approaches 15.5.1. Effect sizes 15.5.2. Robust tests of two means 15.5.3. Bayes factors for comparing two means

16 Comparing Several Means: Faith in Others
16.1. General procedure for comparing means 16.2. Comparing several means with the linear model 16.2.1. Dummy coding 16.2.2. The F-ratio as a test of means 16.2.3. The total sum of squares (SSt) 16.2.4. The model sum of squares (SSm) 16.2.5. The residual sum of squares (SSr) 16.2.6. Partitioning variance 16.2.7. Mean squares 16.2.8. The F-ratio 16.2.9. Comparing several means using a computer 16.3. Contrast coding 16.3.1. Generating contrasts 16.3.2. Devising weights 16.3.3. Contrasts and the linear model 16.3.4. Post hoc procedures 16.3.5. Contrasts and post hoc tests using a computer 16.4. Storm of memories 16.5. Repeated-measures designs 16.5.1. The total sum of squares, SSt 16.5.2. The within-participant variance, SSw 16.5.3. The model sum of squares, SSm 16.5.4. The residual sum of squares, SSr 16.5.5. Mean squares and the F-ratio 16.5.6. Repeated-measures designs using a computer 16.6. Alternative approaches 16.6.1. Effect sizes 16.6.2. Robust tests of several means 16.6.3. Bayesian analysis of several means 16.7. The invisible man

17 Factorial Designs
17.1. Factorial designs 17.2. General procedure and assumptions 17.3. Analysing factorial designs 17.3.1. Factorial designs and the linear model 17.3.2. The fit of the model 17.3.3. Factorial designs on a computer 17.4. From the pinnacle to the pit 17.5. Alternative approaches 17.5.1. Calculating effect sizes 17.5.2. Robust analysis of factorial designs 17.5.3. Bayes factors for factorial designs 17.6. Interpreting interaction effects

Epilogue: The Genial Night: Si Momentum Requiris, Circumspice
See what students are saying! "PERFECT FOR EVERYONE that finds stats difficult, pointless or boring, this book proves them wrong! The way the concepts are explained is so easy to grasp and it makes it even entertaining to learn! I wish I had discovered this sooner!"

Product details

ISBN: 9781446210451
Published: 2016-05-25
Publisher: SAGE Publications Ltd
Weight: 1430 g
Height: 246 mm
Width: 189 mm
Audience level: UF, 05
Language: English
Format: Paperback
Pages: 768

Author

Biographical note

Andy Field is Professor of Quantitative Methods at the University of Sussex. He has published widely (100+ research papers, 29 book chapters, and 17 books in various editions) in the areas of child anxiety and psychological methods and statistics. His current research interests focus on barriers to learning mathematics and statistics. He is internationally known as a statistics educator. He has written several widely used statistics textbooks including Discovering Statistics Using IBM SPSS Statistics (winner of the 2007 British Psychological Society book award), Discovering Statistics Using R, and An Adventure in Statistics (shortlisted for the British Psychological Society book award, 2017; British Book Design and Production Awards, primary, secondary and tertiary education category, 2016; and the Association of Learned & Professional Society Publishers Award for innovation in publishing, 2016), which teaches statistics through a fictional narrative and uses graphic novel elements. He has also written the adventr and discovr packages for the statistics software R that teach statistics and R through interactive tutorials. His uncontrollable enthusiasm for teaching statistics to psychologists has led to teaching awards from the University of Sussex (2001, 2015, 2016, 2018, 2019), the British Psychological Society (2006) and a prestigious UK National Teaching Fellowship (2010). He's done the usual academic things: had grants, been on editorial boards, done lots of admin/service, but he finds it tedious trying to remember this stuff. None of it matters anyway because in the unlikely event that you've ever heard of him it'll be as the 'Stats book guy'. In his spare time, he plays the drums very noisily in a heavy metal band, and walks his cocker spaniel, both of which he finds therapeutic.