Weekly Colloquium

The Center for Research Methods and Data Analysis presents a weekly colloquium series, featuring speakers from KU and visitors from other institutions. All are welcome to attend.

Join our methods-l email list (see the Email Lists how-to) to receive reminders and announcements.

Upcoming Colloquia


Setting Up Your Independent Consulting Practice

Steve Simon, PhD | Research Professor, Department of Biomedical & Health Informatics, UMKC
Friday, January 19, 2018 -
3:00pm to 4:00pm
Watson Library, Room 455

Abstract: If you wish to become an independent statistical consultant, you will find that the work is challenging but also rewarding. In this talk, I will contrast working as an independent consultant with working within a large organization. I will then review the issues that you face with an independent consulting practice: business models, billing, contracts, taxes, and, most importantly, how to find clients.




Recent Colloquia

Tips for Making Beamer Slides with LaTeX and R

Dr. Paul Johnson
Friday, December 1, 2017 -
3:00pm to 4:00pm
Watson Library, Room 455

Open discussion of issues related to the preparation of slides. Documents in LyX as well as "R noweb" (.Rnw) will be discussed. Some complications in compiling documents will also be explained and solved.

 


Penalized quantile regression

Benjamin S. Sherwood, KU School of Business
Friday, November 17, 2017 -
3:00pm to 4:00pm
Watson Library, Room 455
Abstract: Quantile regression is a method for estimating conditional quantiles. It is more robust than least squares and provides a more complete description of a conditional distribution. Penalized quantile regression shrinks estimators towards zero, and the penalties I am interested in allow for simultaneous estimation and variable selection. I will provide an introduction to quantile regression and then discuss penalized quantile regression with both the Lasso (L1 norm) penalty and non-convex penalties (SCAD and MCP). I'll include a brief tutorial on my R package, rqPen, with some discussion of the quirks and challenges of penalized quantile regression.
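
For readers who want to experiment before the talk, here is a minimal sketch of the two steps the abstract describes, assuming the rq.pen() interface of a recent rqPen release; the simulated data and penalty choices are illustrative only.

    library(quantreg)
    library(rqPen)
    set.seed(1)
    x <- matrix(rnorm(100 * 8), ncol = 8)              # 8 candidate predictors
    y <- 1 + x[, 1] - 0.5 * x[, 2] + rt(100, df = 3)   # heavy-tailed errors
    rq(y ~ x, tau = 0.5)                               # unpenalized median regression
    fit_lasso <- rq.pen(x, y, tau = 0.5, penalty = "LASSO")  # shrinks and selects
    fit_scad  <- rq.pen(x, y, tau = 0.5, penalty = "SCAD")   # non-convex penalty; "MCP" is also available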

DDI: What the Data Documentation Initiative means for you

Larry Hoyle, Senior Scientist, KU Institute for Policy & Social Research
Friday, September 29, 2017 -
3:00pm to 4:00pm
Watson Library, Room 455

Abstract: The Data Documentation Initiative (https://www.ddialliance.org) is an international effort to develop standard methods for recording data and tracking changes in storage formats and coding. A new version of the DDI standard is nearing completion. This talk will discuss where the DDI project has been and where it is likely to go.


Keeping a Project Together

Paul Johnson, Director, CRMDA
Thursday, August 17, 2017 -
2:30pm to 3:30pm
Clark Instructional Classroom, 3rd Floor Watson

A discussion of project folders, data management, and the recoding process. The presentation is available as a PDF file, along with accompanying files in a zip package (link).  Please see the CRMDA guide page, https://crmda.ku.edu/guide-39-projects

 

Making My LyX Template - Demonstration Workshop

Dr. Paul Johnson, Director, CRMDA
Friday, March 31, 2017 -
3:00am to 4:00am
Watson Library, Room 455

This seminar is for people who are unsure about whether they might like to use LaTeX or LyX (a GUI editor) as their Thesis/Dissertation template. We encourage people who might like to see how this works to come and watch. Others who already have LaTeX and LyX installed should bring their computers to practice and learn more.

Prior to the seminar, materials needed to follow along can be found at: http://pj.freefaculty.org/guides/Computing-HOWTO/LatexAndLyx/LyX-article-template

The software is free, and a university-approved KU Thesis/Dissertation template is available.

We urge participants to set up their personal computers with TeX Live (Linux, Windows) or MacTeX (Macintosh), as well as the LyX editor, in order to participate fully in the experience. This should be done before the Friday workshop because the download and installation can take a while.

For people who don't have personal computers, we have three computers that can be checked out. Please let us know in advance if you need to use one of these; we will put your name on a list.

To enroll, send an email to crmda@ku.edu with the subject line: LyX - Workshop, Friday, March 31, 2017. The enrollment limit is 20. If fewer than 5 participants have enrolled by 10:10 on Friday, March 31, 2017, the seminar will be postponed until the Fall.


Comprehensive Archive (2010-present)


A strategic analysis of empty container logistics in Southern California using GIS and Spatial Optimization

Ting Lei, Assistant Professor, Geography and Atmospheric Science
Friday, December 2, 2016 -
3:00pm to 4:00pm
Watson Library, Room 455

The ports of Los Angeles and Long Beach form the largest port complex for marine container traffic in the U.S. The large volume of container traffic brings not only opportunities but also a host of issues to the local communities, ranging from pollutant emissions and noise to congestion on the roads and at the terminal gates of the port complex. The objective of this study is two-fold. The first is to use Geographic Information Systems (GIS) to estimate the amount of transportation associated with moving containers in the Los Angeles basin. The second is to use network analysis and spatial optimization to investigate operational measures for alleviating the issues caused by the heavy container traffic (especially the empty container movement) in the basin.


CANCELLED: Monte Carlo sampling error, III: Delta-method simulation, two alternatives, and further topics to explore

Adam Hafdahl
Friday, November 18, 2016 -
3:00pm to 4:00pm
Watson Library, Room 455

Due to unforeseen personal reasons, this talk will be rescheduled for the Spring Semester.

Monte Carlo simulation has been used extensively for decades to study statistical methods' performance under diverse conditions.  More recently, Monte Carlo techniques have begun spreading beyond quantitative methodologists to applied researchers via their wider role in planning studies (e.g., power analysis) as well as simulation estimates from computationally intensive data analyses (e.g., resampling, multiple imputation, Bayesian techniques).  Because Monte Carlo results are stochastic, their sampling error may be an important source of uncertainty, especially when resource constraints permit relatively few replications.  Building on two previous CRMDA talks, in this presentation I discuss ongoing investigations of strategies to quantify and control sampling error in Monte Carlo estimates of point-estimator properties (PEPs) such as (relative) bias, variance, and (root) mean squared error.  After briefly reviewing the focal problem and delta-method techniques for inference about PEPs via an asymptotic variance, I first report an updated simulation study of delta-method confidence intervals (CIs) for several common PEPs.  I then describe two alternatives to delta-method CIs -- random partitions of replications and nonparametric bootstrap variants -- and report preliminary simulation studies of their performance.  Finally, I sketch several concepts, procedures, and issues pertaining to three of many associated topics to explore further: transforming PEP estimators to facilitate inference, planning the number of Monte Carlo replications, and more complex estimands such as comparisons or contrasts of two or more PEPs.


Data.Table for Big Data, A Tutorial

Jeremy Burnison
Friday, November 11, 2016 -
3:00pm to 4:00pm
Watson Library, Room 455

This tutorial will introduce you to the use of the data.table package in R. The advantage of data.table over the standard data frame format is speed of processing and a simple, intuitive syntax for aggregating and summarizing data. This makes data.table desirable for 'big data' analysis. While data.table is an alternative to the data frame, most of your data frame-centric code should work with a data.table, but run faster! Included in this tutorial are examples of indexing and aggregating data variables using the data.table syntax and its advantages. The first example dataset used is not large, so that you can see what happens to the data with the syntax used.
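
A small sketch of the i/j/by syntax discussed in the tutorial (the toy table here is invented; the same code scales to very large tables):

    library(data.table)
    DT <- data.table(id = rep(1:3, each = 4),
                     group = rep(c("a", "b"), 6),
                     x = rnorm(12))
    DT[group == "a"]                             # i: row selection
    DT[, .(mean_x = mean(x), n = .N), by = id]   # j and by: aggregate within id
    setkey(DT, id)                               # set a key for fast joins and subsetting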


Table Making Software for R

Paul Johnson
Friday, October 21, 2016 -
3:00pm to 4:00pm
Watson Library, Room 455

CRMDA staff will demonstrate some tools for making regression tables that are ready for inclusion in documents. We will also survey tools for translating other kinds of tables into documents.

This is the culmination of a great deal of experimentation with R Markdown for document preparation. The following seems certain: if one chooses to prepare a document with R Markdown, one should do so knowing from the start whether it will be converted into PDF or HTML.

It is simply not possible to prepare a full-featured document that can easily walk back and forth between HTML and PDF output. The table-making strategies that work best with PDF are not compatible with HTML, and the converse is also likely to be true.

Unless that either/or choice dissolves, our path going forward is clear. Less formal documents, perhaps "guides" and "workshop" material, can be prepared with HTML as the target. More formal documents, such as articles or reports for clients, should be in the PDF production mode.
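
As one concrete illustration of the PDF-versus-HTML split, the texreg package provides parallel functions for the two targets (a minimal sketch; the models and file names are only examples):

    library(texreg)
    m1 <- lm(mpg ~ wt, data = mtcars)
    m2 <- lm(mpg ~ wt + hp, data = mtcars)
    screenreg(list(m1, m2))                      # plain-text preview in the console
    texreg(list(m1, m2), file = "models.tex")    # LaTeX table for a PDF workflow
    htmlreg(list(m1, m2), file = "models.html")  # HTML table for an HTML workflow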

This presentation is free and open to the public.


Distinguishing Outcomes from Indicators via Bayesian Modeling

Dr. Roy Levy, ASU
Friday, September 23, 2016 -
3:00pm to 4:00pm
Watson Library, Room 455

A conceptual distinction is drawn between indicators, which serve to define latent variables, and outcomes, which do not. However, commonly used frequentist and Bayesian estimation procedures do not honor this distinction. They allow the outcomes to influence the latent variables and the measurement model parameters for the indicators, rendering the latent variables subject to interpretational confounding. Modified Bayesian procedures that preclude this are advanced, along with procedures for conducting diagnostic model-data fit analyses. These are studied in a simulation, where they outperform existing strategies, and illustrated with an example.


A Panel Quantile Approach to Attrition Bias in Big Data: Evidence from a Randomized Experiment

Dr. Carlos Lamarche
Friday, September 16, 2016 -
3:00pm to 4:00pm
Watson Library, Room 455

Authors: Matthew Harding (Duke University) and Carlos Lamarche (University of Kentucky)

Abstract: This paper introduces a quantile regression estimator for panel data models with individual heterogeneity and attrition. The method is motivated by the fact that attrition bias is often encountered in Big Data problems. For example, many users sign up for the latest utility program but few remain active users several months later, making the evaluation of such interventions inherently very challenging. Building on earlier work by Hausman and Wise (1979), we provide a simple identification strategy that leads to a two-step estimation procedure. In the first step, the coefficients of interest in the selection equation are consistently estimated using parametric or nonparametric methods. In the second step, standard panel quantile methods are employed on a subset of weighted observations. The estimator is computationally easy to implement in Big Data applications with a large number of subjects. We investigate the conditions under which the parameter estimator is asymptotically Gaussian and we carry out a series of Monte Carlo simulations to investigate the finite sample properties of the estimator. We explore an application to the evaluation of a recent Time-of-Day electricity pricing experiment inspired by the work of Aigner and Hausman (1980).

 


Estimating and Testing Panel Quantile Regression Models

Dr. Carlos Lamarche
Friday, September 16, 2016 -
10:00am to 11:00am
Watson Library, Room 455

This presentation discusses newly developed estimators for panel quantile models. We first briefly introduce quantile regression and panel quantile regression. We then concentrate on practical details about estimating a few of the existing models in the literature, with several empirical applications and worked examples along the way.


Data Analysis Outside of Academia - How does a PhD translate?

Jared Harpole & Luke McCune
Friday, April 22, 2016 -
3:00pm to 4:00pm
Watson Library, Room 455

Luke McCune, a data scientist currently working at Commerce Bank in Kansas City, will be giving a talk concerning his choice and experiences in pursuing a career outside of academia. This talk will cover the job search and application process, the responsibilities and experiences Luke has had as an emerging data scientist, and the benefits (and limitations) of his education as they pertain to his current role. In regard to job hunting, particular focus will be placed on common interview and examination processes at area businesses, and on recommended job search resources (e.g., Glassdoor.com). Roles, responsibilities, and expectations will also be discussed, especially how they factor into future job responsibilities and opportunities for advancement. The discussion of job responsibilities will additionally touch on the matrix of taught vs. required skills, with notes given to concepts and techniques uncommon in academia but crucial within most industry positions (e.g., prediction accuracy, decision trees).

Jared Harpole is currently a data scientist at Pinsight Media in Kansas City. He has worked with data sets as small as 5 rows and as large as 45 billion rows and is currently building and deploying a recommendation engine for suggesting hundreds of thousands of mobile apps to tens of millions of users. Jared will discuss his journey to becoming a data scientist and give suggestions on what aspiring students may want to consider in order to successfully make the transition. Further, he will provide insight into what tools, techniques, and experiences students should consider when planning to pursue a career outside of academia. 

Questions and active discussion are strongly encouraged.


Testing for Slope Heterogeneity Bias in Panel Data Models

Ted Juhl
Friday, February 26, 2016 -
3:00pm to 4:00pm
Watson Library, Room 455

Standard panel data models typically make use of the assumption that each unit has the same slope. We show how the failure of the homogeneity assumption induces bias in fixed effects estimators. We propose a test for the existence of the bias and suggest solutions to circumvent the problem. Monte Carlo experiments show that the new test has good size and power for a variety of sample sizes for both N and T. The procedure is illustrated with an empirical example that explores the sensitivity of firm investment to cash flow and Q.


Strategies to Export Regression Tables from Stata

Jacob Fowles
Friday, February 19, 2016 -
3:00pm to 4:00pm
Watson Library, Room 455

A brief survey of methods for creating regression tables for inclusion in research reports.

Special feature: Paul Johnson will demonstrate some nice tables from R produced with the packages "rockchalk" and "texreg".


How to Cheat on your LaTeX Homework

Paul Johnson
Friday, February 5, 2016 -
3:00pm to 4:00pm
Watson Library, Room 455

This will be a demonstration, not a slide show.

http://pj.freefaculty.org/guides/Computing-HOWTO/LatexAndLyx/LyX-for_LaTeX_homework

If you are getting started with LaTeX, there is a relatively easy on-ramp called LyX. LyX is a graphical "word processor" style program that can create LaTeX documents. It can make it easier to create documents with sections, cross references, equations, tables, and bibliographies. LyX has an on-line "source view" mode, so that as you create material in the graphical interface, you immediately see the corresponding LaTeX markup. Although I have been writing documents with various LaTeX systems since before the turn of the century, I still use LyX on a daily basis. If you like working in LyX, you can stay within its confines, or you can open LyX side-by-side with another LaTeX document framework and use LyX as a "sketchpad" to learn the right markup.

The KU Information Technology support team has installed LyX in the GIS/Data computer lab on the 4th Floor in Watson Library. Anybody with a KU ID can log in. That is the best way to try it.

LyX is a free program (http://www.lyx.org) that can be installed on your computers as well. Versions exist for Windows, Macintosh, or Linux computers.  The installation might be a bit challenging on MS Windows, but it does work (in the end).  We have done it many times.


Where do Multivariate Normal Random Variables Come from?

Paul E. Johnson
Friday, January 29, 2016 -
3:00pm to 4:00pm
Watson Library, Room 455

Please find attached with this announcement the zip file that includes the pdf named "mvn-generator-1.pdf" and accompanying slides "mvn-generator-slides.pdf". 

This talk surveys some matrix notation and concepts that were unearthed in an effort to understand some malfunctions in a simulation of propensity score matching estimates that was reported in the book Propensity Score Analysis (http://pj.freefaculty.org/scraps/revisiting-PSM.pdf).

This will include definitions and terminology:

  1. How to create a random draw from MVN(mu, sigma).
  2. Orthogonal matrix
  3. Matrix square roots
  4. Matrix decompositions

We will compare the eigen decomposition, the Cholesky decomposition, the QR decomposition, and the singular value decomposition (SVD).

This will not be a "work out the math" presentation, it will be a "show some matrices and wave your hands" presentation. When the rubber meets the road in a computational framework, the action is in matrix decompositions.

In case you wonder "why would it be useful to know this?", consider the following. Although stats books teach us that the OLS slope estimate is calculated as (X'X)^{-1}X'y, no reasonable stats program has actually done that calculation since the 1970s. The watchwords in numerical linear algebra are:

1. Never form X'X unless you absolutely can't avoid it.
2. Never "solve" X'X by following the matrix algebra suggested in the textbooks.
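
A minimal sketch of both points, using base R only (the particular mu and Sigma are invented):

    set.seed(1)
    mu <- c(0, 1)
    Sigma <- matrix(c(2, 0.8, 0.8, 1), 2, 2)
    L <- t(chol(Sigma))             # lower-triangular square root: L %*% t(L) = Sigma
    x <- mu + L %*% rnorm(2)        # one draw from MVN(mu, Sigma)

    X <- cbind(1, rnorm(100))       # OLS without ever forming X'X:
    y <- X %*% c(1, 2) + rnorm(100)
    qr.coef(qr(X), y)               # QR-based estimate, same answer as (X'X)^{-1}X'y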


Generalized Additive Mixed Models

Prof. R Harald Baayen, Department of Linguistics, University of Tübingen, Germany
Wednesday, January 20, 2016 -
3:00pm to 4:00pm
Watson Library, Room 455

Generalized additive mixed models (GAMMs) are an extension of the generalized linear mixed model that provides the analyst with a wide range of tools to model nonlinear functional dependencies in two or more dimensions (wiggly regression curves, wiggly regression surfaces and hypersurfaces).  

GAMMs, which are implemented in the mgcv package for R by Simon Wood, provide a substantial and non-trivial addition to the toolkit of experimental psychology and experimental linguistics.  One particularly important extension is the possibility to include random effect factor smooths.  In the context of the classic linear mixed-effects model, random intercepts combined with random slopes make it possible to calibrate regression lines to the levels of random effect factors (e.g., subjects).  The factor smooths in GAMMs provide a non-linear extension, enabling the modeling of nonlinear curves instead of straight lines.  GAMMs can be important for capturing nonlinear trends in time series data, ranging from the successive reaction times in a simple behavioral experiment to the subject-specific fluctuations in the amplitude of the electrophysiological response of the brain to items in an EEG experiment.

GAMMs are also crucial for the proper modeling of nonlinear interactions between numerical predictors.  The potential of GAMMs for the language sciences will be illustrated by means of examples from dialectometry, phonetics, and psycholinguistics.
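
For attendees who want to see the syntax, here is a minimal sketch of a factor smooth in mgcv; the data frame and variable names (rt, trial, subject) are hypothetical.

    library(mgcv)
    # 'dat' is assumed to hold a response rt, a time index trial, and a factor subject
    m <- gam(rt ~ s(trial) + s(trial, subject, bs = "fs", m = 1),
             data = dat, method = "REML")
    summary(m)
    plot(m, pages = 1)   # population smooth plus subject-specific wiggly curves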


Hurdles and Steps: Estimating Demand for Solar Photovoltaics

Dr. Tsvetan Tsvetanov, Economics
Friday, November 6, 2015 -
3:00pm to 4:00pm
Watson Library, Room 455

This paper estimates demand for residential solar photovoltaic (PV) systems using a new approach to address three empirical challenges that often arise with count data: excess zeros, unobserved heterogeneity, and endogeneity of price. Our results imply a price elasticity of demand for solar PV systems of -1.76. Counterfactual policy simulations indicate that new installations in Connecticut in 2014 would have been 47 percent less than observed in the absence of state financial incentives, with a cost of $135/tCO2 assuming solar displaces natural gas. Our Poisson hurdle model approach holds promise for modeling the demand for many new technologies.
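
The hurdle specification itself can be sketched with the pscl package (this is not the authors' code; the data frame and variables are hypothetical):

    library(pscl)
    # 'towns' is assumed to hold a count of installations plus price and income covariates
    fit <- hurdle(installs ~ price + income, data = towns,
                  dist = "poisson", zero.dist = "binomial", link = "logit")
    summary(fit)   # one part models the excess zeros, the other the positive counts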


Sampling error in Monte Carlo estimates of point estimators' properties: Selected applications and preliminary evaluation

Adam Hafdahl
Friday, October 30, 2015 -
3:00pm to 4:00pm
Watson Library, Room 455

Monte Carlo simulation is a popular empirical technique for studying a statistical point estimator's bias, variance, mean squared error, or other properties.  Monte Carlo estimates of such properties are subject to sampling error, which is rarely addressed explicitly in the planning, conduct, or reporting of Monte Carlo studies.  Quantifying and controlling Monte Carlo sampling error can help allocate resources more efficiently in these simulation experiments (e.g., time, computing costs) as well as aid interpretations of and decisions based on their results (e.g., choices among competing estimators).  After reviewing briefly a delta-method approach to obtain a large-sample (co)variance (matrix) for Monte Carlo estimators of one or more selected properties, I describe and demonstrate several applications of it.  These include standard errors and confidence intervals for popular properties of one point estimator, the same for selected comparisons between two point estimators (e.g., relative efficiency, variance vs. expected variance estimate), and strategies for planning the number of independent replications.  I also present findings from preliminary investigations of the delta-method approach's performance for such applications, including properties of its confidence intervals.

 


Git it together

Dr. Paul Johnson
Friday, October 16, 2015 - 3:00pm
Watson Library, Room 455

There will be a demonstration of GitLab, a web service we have implemented for management of projects (an enhancement of, and competitor to, GitHub). The "Git it Together" document does not have the GitLab component in it, but it may eventually have that, if we keep pushing in that direction.

This tutorial is training for CRMDA staffers, but is open to any other faculty or students who are interested in using Git as a part of the research workflow.  I expect that Git, or something like it, will soon be considered a required element in the new push for reproducible research.

In order to "play along" with the demonstration, we urge attendees to bring laptop computers and to install Git on them.  Please see https://git-scm.com/book/en/v2/Getting-Started-Installing-Git. Here is the brief summary (which I wrote before I knew about that link).

1. Windows: Install the package from https://git-scm.com/downloads. The Windows download will be a file with a name like "Git-2.6.1-64-bit.exe"

2. Mac: There are options here. Git is included with Xcode. On a newer Mac OS, open a Terminal and run "git" and, if you don't have it yet, the OS will tell you to go get Xcode. That includes a (possibly outdated) version of Git, but you can get the latest and greatest for Mac either on the Git site https://git-scm.com/downloads or by installing the Homebrew framework and running "brew install git".

3. Linux: There are Git packages for all major distributions.


Climate Governance in the United States & Greenhouse Gas Emissions

Dr. Dorothy Daley
Friday, September 25, 2015 - 3:00pm
Watson Library, Room 455

This presentation will provide an overview of the broader research project, describe the creation and structure of a multilevel secondary dataset, and outline the types of analytical approaches we expect to use to answer our research questions. Some preliminary results from initial random-effects multilevel modeling will also be presented.

Climate change is often described as a significant global challenge best addressed with centralized national and international institutions, but in the US, the federal government’s role has been limited while many states and localities are leading in the development of climate mitigation and adaptation policies. This kind of polycentric governance—where hundreds of subnational US governments and thousands of businesses are actively addressing climate change—is surprising.  Social science research to date has largely focused on exploring the occurrence of climate mitigation or adaptation policy activity at just one level of government while ignoring the consequences of that policy activity—actual changes in environmental performance by Greenhouse Gas (GHG) emitters. This shortcoming stems, in part, from data restrictions; there was no facility-level GHG emission trend data until 2007 when Congress created the Greenhouse Gas Reporting Program (GHGRP).  We use this new data set to better understand what factors influence changes in facility-level GHG emissions.  

In particular, the research team uses an institutional policy analysis framework to examine three aspects of climate governance in the US. First, we describe the types of climate risk governance arrangements that have developed across and within the states. Second, we examine when and where these institutional arrangements spur Greenhouse Gas (GHG) emission reductions at facilities. Third, we analyze the institutional elements and interactions that are most effective in changing environmental performance.  

 


ACF Interactive Sessions

Paul E. Johnson
Friday, September 11, 2015 -
3:00pm to 4:00pm
Watson Library, Room 455

This is the first in a sequence of instructional presentations about usage of the Advanced Computing Facility Community Cluster. This presentation will be about interactive sessions. If you want to run R, Mplus, Stata, SPSS, SAS, Matlab, or other programs on the high-performance compute nodes, this is the chance to learn how. There will be other presentations in the future for the preparation and submission of batch jobs on the ACF cluster.

Background: As of July 7, 2015, the CRMDA is now a member of the Advanced Computing Facility compute cluster. We no longer operate the separate cluster named hpc.quant.ku.edu. The ACF cluster allows both large-scale batch computing as well as interactive sessions.

We have been very busy preparing new instructions and streamlining the cluster's interface so that it is workable and pleasant. We realize this work is not finished, but we would like to draw your attention to our current offerings at:

http://crmda.ku.edu/computing

On Friday, September 11, the particular piece we will demonstrate is the remote desktop experience offered by the NoMachine client. The documentation for that is

http://crmda.ku.edu/interactive-session.

In particular, we'll illustrate how to run an Mplus program.

http://crmda.ku.edu/acf-mplus

Modeling Accelerometry Data

Amber Watts, PhD, Department of Clinical Psychology, KU
Friday, April 24, 2015 -
3:00pm to 4:00pm
Watson Library, Room 455

Accelerometers are becoming a popular method for measuring physical activity and other human behaviors. The data produced by this method are vast and are sampled as frequently as once per second for days or weeks. Most researchers collapse tens of thousands of data points into a single number such as the average number of minutes per day spent in a particular type of activity, thus ignoring intraindividual variability and other potentially meaningful patterns. I will discuss my project ACCEL which measures physical activity in sedentary older adults with and without Alzheimer’s disease and discuss some of the complications and questions that arise from using this type of data collection method.

This event is free and open to the public.


Causal Inference, happiness and experiments in testing

Howard Wainer, Distinguished Research Scientist - National Board of Medical Examiners
Friday, March 13, 2015 -
3:00pm to 4:00pm
Watson Library, Room 455

In this talk I will introduce Rubin’s Model for Causal Inference and show how it can guide us to better answers to important questions. Specifically we will look into issues of happiness, test speededness, correcting for unplanned interruptions, the effect of feedback to judges in standard setting, and the value of teaching to the test.

 

 


Tools for Selecting an Optimal Measurement Model Using Bayesian Confirmatory Factor Analysis: Parallel Computations on the CRMDA Cluster

Terrence Jorgensen, CRMDA & Dept. of Psychology, KU
Friday, February 27, 2015 - 3:00pm
Watson Library, Room 455

Establishing measurement equivalence is necessary for valid comparison of latent parameters across groups or occasions, so testing equivalence poses a frequent model-comparison problem for applied researchers.  With the increasing popularity of Bayesian options in SEM software, researchers will use the most readily available tool for model comparison in a Bayesian framework, in which chi-squared-difference tests and fit indices are unavailable: the deviance information criterion (DIC).  The more general Watanabe-Akaike information criterion (WAIC) has been proposed more recently, along with standard error estimates for WAIC, which are unavailable for DIC.  I investigate the sampling behavior of DIC and WAIC in the context of selecting an optimal measurement model in Bayesian CFA.  I assess the relative efficiency of WAIC compared to DIC, evaluate analytical WAIC SEs by calculating relative bias, and report how often WAIC and DIC indicate a preference for each invariance model.

 

 

 


Is There a Culture War? Conflicting Value Structures in American Public Opinion

William Jacoby, Professor of Political Science, Michigan State University
Friday, February 20, 2015 -
3:00pm to 4:00pm
Watson Library, Room 455

Abstract: This paper examines the "culture war" hypothesis by focusing on American citizens' choices among a set of core values. A geometric model is developed to represent differences in the ways that individuals rank-order seven important values: freedom, equality, economic security, social order, morality, individualism, and patriotism. The model is fitted to data on value choices from the 2006 Cooperative Congressional Election Survey. The empirical results show that there is an enormous amount of heterogeneity among individual value choices; the model estimates contradict any notion that there is a consensus on fundamental principles within the mass public. Further, the differences break down along political lines, providing strong evidence that there is a culture war generating fundamental divisions within twenty-first century American society.

Sponsored by the Political Science Lecture Series and the University of Kansas Representation Initiative (KUREP)  (http://kurep.ku.edu/)

Professor Jacoby's Website: http://polisci.msu.edu/jacoby


Visualizing and Reporting Regression Results Using Stata

Jacob Fowles, School of Public Affairs and Administration, KU
Friday, February 13, 2015 -
3:00pm to 5:00pm
Watson Library, Room 455

This seminar will provide novice to intermediate Stata users with insights on how to work more effectively, efficiently, and productively through an exploration of Stata's built-in programming language and extensive suite of user-written add-ons.  The bulk of the seminar will focus on how to do useful things with estimation results, including how to automate the tedious process of creating results tables that are appropriately formatted, labeled, and ready to insert into manuscripts; how to produce other useful tables (such as tables of descriptive or summary statistics); and how to use Stata's powerful post-estimation suite of commands and graphing engine to visualize your results.  A basic working knowledge of Stata and multiple regression is assumed.


Creating R Classes (S3): Regression Contexts or Similar

Paul Johnson, CRMDA
Friday, February 6, 2015 -
3:00pm to 4:00pm
Watson Library, Room 455

A discussion (with examples) of the R S3 class framework for object-oriented programming. It describes how to declare a class and how to interact with instances of the class.
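
A minimal sketch of the pattern (the class name and constructor here are invented for illustration):

    # constructor: build a list and declare its class
    make_regsum <- function(fit) {
      stopifnot(inherits(fit, "lm"))
      out <- list(coef = coef(fit), n = length(residuals(fit)))
      class(out) <- "regsum"
      out
    }
    # a method: print() dispatches on the "regsum" class
    print.regsum <- function(x, ...) {
      cat("Regression summary, n =", x$n, "\n")
      print(round(x$coef, 3))
      invisible(x)
    }
    rs <- make_regsum(lm(mpg ~ wt, data = mtcars))
    rs   # automatically calls print.regsum()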

Attached is a zip file including the browseable HTML output as well as the R Markdown file that generated it. If you just want to see the slides, look here:

http://pj.freefaculty.org/scraps/R_classes-1.html

 

 


Grant Proposal Preparations

Megan Todd
Friday, December 12, 2014 -
3:00pm to 4:00pm
Watson Library, Room 455

Megan Todd, Grant Specialist in Pre-Award services at KU Office of Research, will discuss grant proposal preparation. 


Testing Non-Nested Structural Equation Models

Ed Merkle, Assistant Professor of Psychology, University of Missouri
Friday, December 5, 2014 -
3:00pm to 4:00pm
Watson Library, Room 455


The purpose of the talk is to describe and study the theory of Vuong (1989) for comparing non-nested structural equation models (SEMs). SEM researchers may be familiar with portions of this theory (via, say, the Vuong-Lo-Mendell-Rubin test for latent class models), but implementation difficulties have prevented full application of the theory to SEM thus far. We resolve these difficulties and illustrate the formal statistical tests that result, allowing us to (i) identify situations where non-nested candidate models cannot be distinguished from one another, (ii) identify situations where non-nested models provide different fits to a population of interest, and (iii) obtain robust test statistics for nested model comparison. To aid in illustration, we will demo the free R package "nonnest2" and also consider applications to SEM for ordinal data (with direct relevance to item response modeling).
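
A minimal sketch of the kind of comparison to be demonstrated, assuming the vuongtest() interface of nonnest2 together with lavaan's built-in Holzinger-Swineford data; the two candidate structures are invented for illustration:

    library(lavaan)
    library(nonnest2)
    m1 <- ' g =~ x1 + x2 + x3 + x4 + x5 + x6 '
    m2 <- ' visual  =~ x1 + x2 + x3
            textual =~ x4 + x5 + x6 '
    fit1 <- cfa(m1, data = HolzingerSwineford1939)
    fit2 <- cfa(m2, data = HolzingerSwineford1939)
    vuongtest(fit1, fit2)   # distinguishability test plus the model-comparison test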


Using Animal Instincts to Find Efficient Experimental Designs

Weng Kee Wong, Professor of Biostatistics, UCLA
Friday, November 21, 2014 -
3:00pm to 4:00pm
Watson Library, Room 455

Experimental costs are rising and it is important to use minimal resources to make statistical inference with maximal precision. Optimal design theory and ideas are increasingly applied to address design issues in a growing number of disciplines, and they include biomedicine, biochemistry, education, agronomy, manufacturing industry, toxicology and food science, to name a few.
I first present a brief overview of optimal design methodology and recent advances in the field. Nature-inspired meta-heuristic algorithms are then introduced to find optimal designs for potentially any model and any design criterion. This approach works quite magically and frequently finds the optimal solution or a nearly optimal solution for an optimization problem in a very short time. There is virtually no technical assumption required for the approach to perform well and the user only needs to input a few easy tuning parameters in the algorithm. Using popular models from the biopharmaceutical sciences as examples, I show how these algorithms find different types of optimal designs for dose response studies, including mini-max types of optimal designs where effective algorithms to find them have remained stubbornly elusive until now.


Quantifying and controlling sampling error in Monte Carlo studies of statistical properties

Adam Hafdahl
Friday, November 14, 2014 -
3:00pm to 4:00pm
Watson Library, Room 455

Monte Carlo studies are often used to study statistical properties of a point estimator (e.g., bias, mean squared error), interval estimator (e.g., coverage probability, width), or test (e.g., Type I or II error).  These simulation experiments' estimates of a given property are subject to sampling error, especially with relatively few replications.  Quantifying this sampling error can be useful in planning, analyzing, and reporting Monte Carlo studies, such as to choose the number of replications, compare performance to a reference value or among conditions, or supplement reported estimates with standard errors or confidence intervals.  In this talk I describe and demonstrate strategies for quantifying and controlling Monte Carlo sampling error, with particular attention to a delta-method approach for properties that are functions of a point estimator’s low-order moments.  Partial derivatives needed to implement this approach are provided for several popular properties.
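
The simplest case can be sketched directly: when the property is the bias of a point estimator, the Monte Carlo standard error is just the standard deviation of the per-replication errors divided by the square root of the number of replications (toy simulation below):

    set.seed(1)
    R <- 2000                              # number of replications
    est <- replicate(R, mean(rnorm(25)))   # estimator applied to each simulated sample
    err <- est - 0                         # errors relative to the true value mu = 0
    bias_hat <- mean(err)                  # Monte Carlo estimate of bias
    mcse <- sd(err) / sqrt(R)              # Monte Carlo standard error of that estimate
    bias_hat + c(-1.96, 1.96) * mcse       # approximate 95% interval for the true bias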

 


Sweaving Documents for Sustainable/Replicable R Reports

Paul Johnson, CRMDA
Friday, October 31, 2014 -
3:00pm to 4:00pm
Watson Library, Room 455

This will demonstrate, with LyX

  • How to create Beamer presentations.
  • How to Sweave inside a Beamer document (that is, adapt the LyX Beamer template to allow the use of R code, the results of which will appear in the final result)
  • A series of customizations that are used in most of the presentations under http://pj.freefaculty.org/guides/stat, including the Sweavel LaTeX style.

Some of the material to be presented is already available at http://pj.freefaculty.org/latex. Links to the new Beamer-Sweave material will be available there, but the main development work will be available under http://pj.freefaculty.org/guides/Computing-HOWTO/LatexAndLyx.


Introduction to xxM: n-level Structural Equation Modeling

Pascal Deboeck
Friday, October 24, 2014 -
3:00pm to 5:00pm
Watson Library, Room 455

In recent years, there has been increasing interest in combining two mainstays of statistical analysis in psychological research: multilevel models and structural equation modeling. The close match to theory of models that can combine latent variables and random effects seems promising. Until recently, these models could only be examined with expensive programs such as Mplus. Through a grant from the Institute of Education Sciences new software for multilevel structural equation modeling, xxM, is now available. The xxM software is a free package available for the statistical program R. xxM distinguishes itself from existing multilevel structural equation modeling software by allowing for any number of levels of data, and moreover data with complex nested structures such as cross-classified data, partially nested data, round-robin designs, and longitudinal data with switching classifications.

 

Fitting latent variable, random effect models to data with complicated nesting structures necessitates data and model structures which are somewhat unfamiliar relative to most structural equation modeling and multilevel modeling software. This presentation will introduce the general steps required to produce an xxM model and provide examples of simple xxM models. Attendees who would like to follow along with the code (to be provided) are encouraged to download xxM from http://xxm.times.uh.edu (requires registration), and install the package using install.packages("C:/path-to-download/xxm.zip", repos = NULL). To the best of the presenter's knowledge, xxM is currently only available for Windows 32-bit R.


Mplus Automation Part 2

Terry Jorgensen and Benjamin Kite
Friday, October 3, 2014 -
3:00pm to 4:00pm
Watson Library, Room 455

Executing Mplus from R, retrieving estimates.
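
A minimal sketch of the workflow, assuming the runModels()/readModels() functions of the MplusAutomation package and a hypothetical folder of prepared .inp files:

    library(MplusAutomation)
    runModels("mplus/models", recursive = TRUE)   # run Mplus on every .inp file in the folder
    out <- readModels("mplus/models")             # parse the resulting .out files into R
    out[[1]]$parameters$unstandardized            # parameter estimates (element names follow the package's documented output structure)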


Mplus Automation Part I

Terrence Jorgensen
Friday, September 26, 2014 -
3:00pm to 4:00pm
Watson Library, Room 455

Setting up R programs to launch Mplus with the MplusAutomation package.


A Comparison of Bayesian and Frequentist Approaches for Estimating a Continuous-Time Model with Discretely-Observed Panel Data

Aaron Boulton
Friday, May 9, 2014 -
3:00pm to 4:00pm
Watson Library, Room 455

Continuous-time models are used to model phenomena in many areas of science. In psychology and other social sciences, however, continuous-time models are difficult to apply given the small number of repeated observations typically available. In such cases, one promising approach that has been suggested is the exact discrete model (EDM)—a set of mathematical relations that connect the parameters of a discrete-time autoregressive cross-lagged (ARCL) panel model to those in an underlying continuous-time model. To date, several frequentist approaches have been developed for estimating continuous-time model parameters via the EDM. Bayesian estimation methods, however, have not yet been explored. The purpose of this project was to outline a Bayesian implementation of the EDM with non-informative priors and compare its performance to two alternative frequentist approaches—the EDM-SEM (Oud & Jansen, 2000) and Oversampling (Singer, 2012) methods—under proper model specification and variable experimental conditions. Data were generated under different combinations of sample size, number of observation time points, and population parameter value configurations for a bivariate panel model. In addition, starting values for the frequentist methods were set to the data generating values or were randomly perturbed. Results suggest that the three estimation approaches produced equivalent results at moderate and large sample sizes. The Bayesian implementation resulted in fewer non-converged and improper solutions compared to the frequentist approaches in nearly all experimental conditions. Parameter estimates were slightly less biased and more efficient under frequentist estimation at small sample sizes. In contrast, the Bayesian implementation generally provided equivalent or better interval coverage across all conditions. Finally, differences were found between frequentist and Bayesian estimation with regard to Type I error rates of the SEM-based chi-square fit test statistic. To summarize, preliminary support for Bayesian estimation of the EDM with panel data under a variety of experimental conditions was found; in addition, the Oversampling approach appears to be a particularly robust technique within the frequentist paradigm. Alternative prior specifications and sampling algorithms for the Bayesian implementation, modeling extensions, and the performance of these approaches under less ideal analytic conditions are important areas for further study.


Time for a change: Explorations, trials, and tribulations of continuous time modeling.

Joel Steele, PhD
Friday, April 25, 2014 -
3:00pm to 4:00pm
Watson Library, Room 455

There are, arguably, as many approaches to continuous time modeling as there are possible benefits. The focus of this presentation is on the statistical modeling of dyadic interactions in continuous time. I will present a number of different approaches to using differential equations for dyadic interaction data collected daily from romantic couples. The journey begins with an explanation of the underlying theoretical model and, from there, continues with a look at an early Bayesian approach to modeling the system. Next, an examination of dyadographic (dyad-by-dyad) analysis is presented using the same underlying model with different a priori specifications that, when combined, can be extended into predictive linear models. Lastly, the inclusion of stochastic input, or random shock, in the governing dynamics is modeled explicitly via the LSDE (Linear Stochastic Differential Equations) package written in SAS/IML. This ostensibly challenging model becomes more tractable via confirmatory simulations that help to clarify the unique specifications necessary for modeling in this framework.


Why We Don't Need Stevens' Theory of Scales of Measurement to Link Parameters in IRT Models

Dr. Wim Van Der Linden
Friday, April 18, 2014 -
3:00pm to 4:00pm
Watson Library, Room 455

Stevens’ theory of scales of measurement has been deeply ingrained in the minds of social and behavioral scientists, for instance, in the form of the common belief that we need to link parameters in IRT models to correct for different units and zeros in different item calibrations. In this presentation I will argue that we don’t need these notions, but that the necessity of parameter linking is due only to a fundamental problem inherent in the formal structure of these models—their general lack of identifiability. More specifically, I will explore the nature of the identifiability problem, characterize the formal shape of linking functions for our common response models, discuss their specific shapes for different parameterizations of the 3PL model, shed some light on existing linking methods such as the mean/mean, mean/sigma, and Stocking-Lord methods, and present an alternative to them.
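
For concreteness, the mean/sigma method mentioned above can be sketched in a few lines of R; the anchor-item estimates here are made up.

    b_source <- c(-1.2, -0.3, 0.4, 1.1)   # anchor difficulties, source calibration
    b_target <- c(-0.9, 0.0, 0.8, 1.6)    # same anchor items, target calibration
    A <- sd(b_target) / sd(b_source)
    B <- mean(b_target) - A * mean(b_source)
    A * c(-0.5, 0.7) + B                  # difficulties mapped to the target metric: b* = A*b + B
    c(1.3, 0.9) / A                       # discriminations: a* = a/A (the guessing parameter is unchanged)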


Power of Alternative Fit Indices for Multiple Group Longitudinal Tests of Measurement Invariance

Steve Short
Friday, March 28, 2014 -
3:00pm to 4:00pm
Watson Library, Room 455

Measurement invariance testing with confirmatory factor analysis has a long history in social science research, and it has recently increased in use and popularity. The current presentation begins by very briefly reviewing the steps for measurement invariance testing via multiple group confirmatory factor analysis, and by synthesizing previous research recommendations for model testing, including the chi-square difference test and examining change in model fit indices. Previous research on measurement invariance testing has examined change in alternative fit indices such as the CFI, TLI, RMSEA, and SRMR, but these studies had not examined power to detect invariance when more than two groups exist and multiple time points are present. The present study implemented a Monte Carlo simulation to examine the power of changes in alternative fit indices to detect two types of measurement invariance, weak and strong, across a variety of manipulated study conditions including sample size, sample size ratio, lack of invariance, location of noninvariance, magnitude of noninvariance, and type of mixed study design.
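
The model-comparison steps being evaluated can be sketched with lavaan and its built-in Holzinger-Swineford data (a minimal sketch, not the simulation code from the study):

    library(lavaan)
    model <- ' visual =~ x1 + x2 + x3 '
    configural <- cfa(model, data = HolzingerSwineford1939, group = "school")
    weak   <- cfa(model, data = HolzingerSwineford1939, group = "school",
                  group.equal = "loadings")
    strong <- cfa(model, data = HolzingerSwineford1939, group = "school",
                  group.equal = c("loadings", "intercepts"))
    anova(configural, weak, strong)                      # chi-square difference tests
    fitMeasures(weak, c("cfi", "tli", "rmsea", "srmr"))  # fit indices whose changes are examined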


A Comparison of Imputation Strategies to Missing Ordinal Item Scores

Fan Jia, Wei Wu and Craig Enders
Friday, March 14, 2014 -
3:00pm to 4:00pm
Watson Library, Room 455

Ordinal items are common in social and behavioral science research. When there are missing data on ordinal items, multiple imputation can be used to fill in the missing data. In this paper, we compare different imputation strategies for missing ordinal item scores, including imputing ordinal data as continuous normal data (normal model), rounding imputed continuous data to the nearest integer (naïve rounding), a latent variable approach, and using models designed for categorical variables such as discriminant analysis and ordinal and multinomial logistic regression models. We find that the normal model approach and the latent variable approach outperformed the other approaches in reproducing the relationship between scale scores and the reliability coefficients for the scales under the examined conditions.
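
A minimal sketch of two of the strategies being compared, using the mice package; 'items' stands in for a hypothetical data frame of ordinal item scores with missing values.

    library(mice)
    imp_norm <- mice(items, m = 20, method = "norm", seed = 1)      # treat items as continuous normal
    items_ord <- as.data.frame(lapply(items, ordered))
    imp_polr <- mice(items_ord, m = 20, method = "polr", seed = 1)  # proportional-odds (ordinal) model
    head(complete(imp_norm, 1))                                     # first completed data set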

 


Interpretable Data and Capturing Metadata During the Research Process – A Prototype SAS Addin

Dr. Larry Hoyle
Friday, March 7, 2014 - 3:00pm
Watson Library, Room 455

In order to be usable, data must be paired with information about the data (metadata), like the universe from which the observations for a measure were drawn, the wording of a question asked, transformations done to the variable, and so on. Statistical software packages have not traditionally had the capacity to store this sort of information along with the data. Now, though, many of the major packages (SAS, R, Stata, SPSS, and even lowly Excel) have the capability of storing metadata as name-value pairs associated with either the dataset as a whole or with individual variables.
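
In R, for example, such name-value pairs can be attached as attributes (a small sketch with an invented survey item):

    survey <- data.frame(q1 = c(1, 2, 5, NA))
    attr(survey$q1, "label")    <- "Overall, how satisfied are you with the service?"
    attr(survey$q1, "universe") <- "Adults 18+ responding to wave 1"
    attributes(survey$q1)       # the stored metadata travels with the variable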

This talk will explore the possibility of using this capability in conjunction with the most widely used metadata standard for the social sciences, Data Documentation Initiative Lifecycle (DDI-L). It will also include a demonstration of a new tool developed as a SAS Enterprise Guide Addin, designed to facilitate metadata entry.


Bare Minimum Setup for LaTeX Document Preparation

Paul Johnson
Friday, February 14, 2014 -
3:00pm to 4:00pm
Watson Library, Room 455

This session is for people who have never tried LaTeX and possibly have not heard of it before. This is the right place for curious users to get their feet wet. Make sure to bring your laptops!

The materials from the presentation are attached. This software only allows us one upload, so 3 documents are zipped together.

LaTeX-lecture-1.pdf is the general overview of LaTeX.

We suggest new users try the editor LyX, a GUI document preparation system. For people who need to transition to LaTeX document preparation in a "flat text editor," we suggest LyX is a good training system. In preparation for the next step, users might review these writeups about LyX and the transition from an empty document to a personalized LaTeX template. Those documents are named:

Session-1-notes.pdf

template-20140210.pdf

In case you want the source documents for those presentations, or to find out if updates have become available, check http://pj.freefaculty.org/guides/Computing-HOWTO/LatexAndLyx. In particular, it seems likely one might like to open the LyX template document that is represented in the PDF. Look in the folder Computing-HOWTO/LatexAndLyx/LyX-Begin/ for the file template-20140210.lyx or template-20140210.tex


Exploring non-linear associations between nurse staffing and patient assaults in psychiatric units using cubic splines in a three-level generalized mixed model for longitudinal, clustered, over-dispersed count data

Vincent Staggs, Research Assistant Professor, Dept. of Biostatistics, University of Kansas Medical Center
Friday, December 6, 2013 - 3:00pm
Watson Library, Room 455

I will discuss restricted cubic splines and methods for modeling clustered, non-Gaussian data in the context of a study of violence on psychiatric units in 255 U.S. hospitals. Topics will include exposure and offset variables, detecting and dealing with over-dispersion, SAS's GLIMMIX procedure, and measures of model fit and explanatory power. The talk will be geared toward applied researchers, with emphasis on concepts and practice rather than on statistical theory.


Inference in Mixed-Effects (and other) Models Through Profiling the Objective

Douglas Bates, Department of Statistics, University of Wisconsin
Friday, November 8, 2013 -
3:00pm to 4:00pm
Watson Library, Room 455

ABSTRACT: A discussion of statistical theory and the practical challenges in the interpretation of parameter estimates in "random effects" (or "mixed") models. The profile likelihood method is introduced and illustrated with several examples. This is a practical way to derive confidence intervals. Several examples using the tools in the new R package lme4* are presented. The analysis sheds light on the long-standing problems in the analysis of mixed models, in particular, the difficulty in interpreting estimates of variance components.

* Douglas Bates, Martin Maechler, Ben Bolker and Steven Walker (2013). lme4: Linear mixed-effects models using Eigen and S4. R package version 1.0-4. http://CRAN.R-project.org/package=lme4
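
For a quick preview with lme4's bundled sleepstudy data (a minimal sketch, not Professor Bates's own example code):

    library(lme4)
    fm <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
    confint(fm, method = "profile")   # profile-likelihood intervals, including the variance components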

Fitting and Analyzing Mixed-Effects Models in R with lme4

Douglas Bates, Department of Statistics, University of Wisconsin
Friday, November 8, 2013 -
1:00pm to 2:30pm
Watson Library, Room 455

ABSTRACT: A presentation about the lme4 package* for R. lme4 offers a convenient user interface in which to estimate regression models that include random effects. It extends the ability to estimate random effects to the broader class known as the generalized linear model (McCullagh and Nelder, Generalized Linear Models, 2nd ed., Chapman and Hall, 1989). Methods for creating graphs to visualize random effects are a priority in lme4, as is the development of statistical frameworks for the evaluation of uncertainty in those models.

* Douglas Bates, Martin Maechler, Ben Bolker and Steven Walker (2013). lme4: Linear mixed-effects models using Eigen and S4. R package version 1.0-4. http://CRAN.R-project.org/package=lme4
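
A minimal sketch of the generalized (binomial) case using the cbpp data shipped with lme4 (again, not the speaker's own material):

    library(lme4)
    gm <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
                data = cbpp, family = binomial)
    summary(gm)   # fixed effects plus the estimated between-herd variance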

Weekly Colloquium

Muscle Fatigue, Electromyography and Wavelet Analysis (Now What?)

Joseph Weir, Professor, Health, Sport, and Exercise Science, University of Kansas
Friday, November 1, 2013 -
3:00pm to 4:00pm
Watson Library, Room 455

Abstract:

Muscle fatigue, defined as an exercise-induced decrease in muscle force/power production capability, manifests itself in a variety of ways. One of these is a change in muscle electrical activity. The stimulus for muscle contraction is an action potential on the muscle cell membrane, which involves the flow of sodium and potassium ions across the membrane. Muscle electrical activity can be assessed using electromyography (EMG), and changes in both the amplitude and frequency-domain characteristics of the EMG signal occur with fatigue. In general, muscle fatigue is manifested in the EMG signal as frequency compression, where more signal energy occurs at lower frequencies. Typically, changes in the frequency domain of the EMG signal have been quantified using standard power spectrum analysis with the Fourier transform; however, Fourier analysis assumes stationary data. Joint time-frequency methods, such as wavelet analysis, have been employed to more adequately characterize these changes in the EMG signal. With wavelet analysis, one can generate an intensity plot with time on the x-axis and wavelet scale (each scale represents a frequency band) on the y-axis. The resulting picture (intensity plot) is a 2-D array of numbers that requires further analysis. However, the statistical analysis of these data is beyond the skill set of the typical physiologist. Statistical methods accessible to physiological researchers would improve the analysis of these types of data and help move the field forward.
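As a hedged illustration of the kind of intensity plot described above, and not of any EMG pipeline used by the speaker, the sketch below computes a wavelet power image for a synthetic signal whose dominant frequency drifts downward; it assumes the CRAN package WaveletComp and its analyze.wavelet()/wt.image() interface:

    library(WaveletComp)   # assumed interface: analyze.wavelet(), wt.image()
    set.seed(1)
    t <- seq(0, 10, by = 1/100)
    # Synthetic "fatiguing" signal: dominant frequency drifts from ~12 Hz toward ~4 Hz
    x <- sin(2 * pi * (12 - 0.4 * t) * t) + rnorm(length(t), sd = 0.3)
    w <- analyze.wavelet(data.frame(x = x), my.series = "x", dt = 1/100)
    # Time on the x-axis, period/scale on the y-axis, wavelet power as colour
    wt.image(w)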

Weekly Colloquium

An Introduction to Spatial Econometric Models and Methods in Social Science Research

Jacob Fowles, School of Public Affairs, University of Kansas
Friday, October 25, 2013 -
3:00pm to 4:00pm
Watson Library, Room 455

Abstract:

The most basic regression specifications assume that outcomes for different observations occur independently.  When conducting research in the social sciences and related disciplines, we may have many reasons to suspect that this strong assumption does not hold.  This talk provides an overview of this problem as well as an introduction to the common approaches taken in the literature to address it.
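As a hedged illustration of detecting the spatial dependence discussed above (not the speaker's material), Moran's I can be computed for an outcome and for OLS residuals with the spdep package; the coordinates and data below are simulated:

    library(spdep)
    set.seed(42)
    n <- 100
    dat <- data.frame(lon = runif(n), lat = runif(n), x = rnorm(n))
    dat$y <- 1 + 2 * dat$x + rnorm(n)   # simulated; real data may carry a spatial process
    # Neighbour structure from the 5 nearest neighbours, row-standardized weights
    nb <- knn2nb(knearneigh(cbind(dat$lon, dat$lat), k = 5))
    lw <- nb2listw(nb, style = "W")
    # Moran's I for the outcome and for the residuals of an OLS regression
    moran.test(dat$y, lw)
    lm.morantest(lm(y ~ x, data = dat), lw)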

 

Weekly Colloquium

From points to areas: concepts and methods in species distributions modeling.

Jorge Soberon, Professor, Ecology & Evolutionary Biology, University of Kansas
Friday, October 18, 2013 -
10:00am to 11:00am
Watson Library, Room 455

Abstract:

A very general problem is to estimate the geographic area over which some phenomenon exists, based on point-like reports of occurrences. In ecology this is the problem of Species Distributions Modeling (SDM), which can be addressed by conventional methods based on the coordinates of the observations, or by less conventional approaches that depend on advanced regression methods or even machine learning techniques. I will describe the problem and then illustrate a few of the methods, emphasizing a conceptual framework that can be used to better understand what the different techniques do.
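A hedged sketch of the "conventional regression" end of the spectrum described above, using only base R on synthetic presence/absence data; real SDM work uses richer environmental layers and methods:

    set.seed(7)
    n   <- 400
    env <- data.frame(temp = rnorm(n, 20, 5), precip = rnorm(n, 800, 200))
    # Synthetic presence/absence driven mainly by temperature
    pres <- rbinom(n, 1, plogis(-2 + 0.3 * (env$temp - 20)))
    fit  <- glm(pres ~ temp + precip, data = env, family = binomial)
    # Predicted "suitability" over a grid of environmental conditions
    grid <- expand.grid(temp = seq(5, 35, 1), precip = seq(400, 1200, 100))
    grid$suitability <- predict(fit, newdata = grid, type = "response")
    head(grid)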

Weekly Colloquium

To Thin or Not to Thin? The Impact of Thinning Posterior Markov Chains on Parameter Estimation in Latent Trait Models

Jared Harpole
Friday, May 3, 2013 -
3:00pm to 4:00pm
Holiday Inn Holidome, Brazilian Room

The practice of thinning MCMC chains for item response theory (IRT) models has mixed reviews in the literature. Recently, Link and Eaton (2012) found that thinning MCMC chains from a t-distribution produced more biased estimates than not thinning. The purpose of the present talk is to discuss the results of a simulation study extending the work of Link and Eaton (2012) to include the impact of thinning versus not thinning on parameter estimation when fitting a 2PL IRT model.

This talk will include a brief introduction to Bayesian estimation and Gibbs Sampling, provide background on thinning, and discuss findings and implications from the simulation study.
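As a hedged illustration of what thinning does mechanically (not the simulation reported in the talk), the coda package can thin a chain and report effective sample size; the "chain" below is a toy AR(1) series standing in for MCMC draws:

    library(coda)
    set.seed(3)
    # Toy autocorrelated chain (AR(1)) standing in for posterior draws
    draws <- as.numeric(arima.sim(model = list(ar = 0.9), n = 10000))
    chain <- mcmc(draws)
    effectiveSize(chain)                 # effective sample size of the full chain
    thinned <- window(chain, thin = 10)  # keep every 10th draw
    effectiveSize(thinned)
    # Thinning discards draws: the thinned chain's effective size is typically no larger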

Weekly Colloquium

Rankings Based on Test Scores vs. Rankings Based on Latent Variable: When are they the same?

Roger E. Millsap, Professor, Psychology, Arizona State University
Friday, April 26, 2013 -
3:00pm to 4:00pm
Watson Library, Room 3 West Reading Room

Latent variable models represent the statistical relationships between measured variables (items, subtests, parcels) and the latent variables assumed to underlie those measures. Suppose that a single latent variable W underlies a set of measures X, and that we create a test score Y that is the sum of the measures. At minimum, we would like it to be true that when we rank order people on Y, that ranking induces a rank-ordering on W that is proper in some sense. Under what conditions will this be true? This is a question of stochastic ordering, and conditions leading to various types of stochastic ordering have been studied previously. We know, for example, the conditions under which certain ordering properties will hold when the measures X are binary items. Little is known however for the case in which the measures X are not binary, and the latent variable model is the common factor model, although this case is widely used in psychology. This case will be addressed here, and the conditions leading to a useful ordering property will be described. It is shown that the common factor model need not imply any useful ordering properties generally, but it can do so under some conditions that have not received attention previously in this context.

Weekly Colloquium

Tests of measurement invariance along continuous and ordinal auxiliary variables

Ed Merkle, Assistant Professor, University of Missouri
Friday, April 19, 2013 -
3:00pm to 4:00pm
Watson Library, Room 455

The issue of measurement invariance commonly arises in factor-analytic contexts, with methods for assessment including likelihood ratio tests, Lagrange multiplier tests, and Wald tests. These tests typically require advance definition of group membership and number of groups (via an auxiliary variable), along with specification of the model parameters that potentially violate measurement invariance. In this talk, I study tests of measurement invariance that use individuals' scores (i.e., casewise gradients of the likelihood function) from the estimated factor analysis model. These tests can be viewed as generalizations of the Lagrange multiplier test, and they are especially useful for (1) isolating specific parameters affected by measurement invariance violations, (2) identifying subgroups of individuals that violated measurement invariance, and (3) developing novel statistics geared towards ordinal auxiliary variables. The tests are described in detail and illustrated via both simulation and application.

Tests of Measurement Invariance Without Subgroups: A Generalization of Classical Methods
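For readers who want to see the conventional approach that this talk generalizes, here is a hedged sketch of multi-group invariance testing by likelihood ratio test in lavaan, using the Holzinger and Swineford data bundled with the package; the score-based tests described in the abstract are not shown:

    library(lavaan)
    # Conventional multi-group invariance testing via likelihood-ratio tests
    model <- ' visual  =~ x1 + x2 + x3
               textual =~ x4 + x5 + x6
               speed   =~ x7 + x8 + x9 '
    fit.config <- cfa(model, data = HolzingerSwineford1939, group = "school")
    fit.metric <- cfa(model, data = HolzingerSwineford1939, group = "school",
                      group.equal = "loadings")
    anova(fit.config, fit.metric)   # does constraining the loadings worsen fit?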

Weekly Colloquium

Statistical Inference from Nonrandom Samples in Psychological Studies

Sunthud Pornprasertmanit
Friday, April 12, 2013 -
3:00pm to 4:00pm
Watson Library, Room 455

This presentation discusses two popular frameworks for statistical inference: Fisher's model-based inference and Neyman's design-based inference. We show that both frameworks have restrictive requirements which are not often satisfied by the current practice of collecting samples in psychological studies. As a result, accurate statistical inferences cannot be made. To mitigate the problem, we propose a practical approach that researchers may use for valid statistical inference with less restrictive requirements. A simulation study is conducted to evaluate the proposed approach. The importance of having a well-defined population is also emphasized in the presentation.

Weekly Colloquium

Streamlining Missing Data Analysis: Results from the First Rigorous Exploration of the SuperMatrix Technique

Kyle Lang
Friday, March 29, 2013 -
3:30pm to 4:30pm
Watson Library, Room 455

This talk will present the initial findings of a line of research aimed at streamlining missing data analysis by simplifying the use of multiple imputation. A novel analysis strategy was developed (i.e., the SuperMatrix Approach), and a Monte Carlo Simulation Study was conducted to assess the tenability of the proposed technique. Through aggregating multiply-imputed data sets prior to model estimation, the SuperMatrix (SM) Approach was envisioned as a way for researchers to reap the benefits of a principled missing data tool (i.e., multiple imputation), while maintaining the simplicity of complete case analysis. The ability of the SM Approach to produce accurate estimates of model fit, and related quantities, will be discussed. These SM-based estimates of model fit will be judged against estimates derived from two comparison conditions. The first comparison condition was based on a simple, naïve average of the multiple estimates of model fit, and the second comparison condition was based on FIML estimation. Specifically, empirical convergence rates, assessment of direct model fit, and the accuracy of Change in Chi-Squared values derived from each of the three techniques will be scrutinized. Finally, implications, limitations and future directions of the current work will be discussed.
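For contrast, here is a hedged sketch of the conventional multiple-imputation workflow that the SuperMatrix approach is intended to simplify (impute, analyze each data set, pool with Rubin's rules), using the mice package and its bundled nhanes data; it does not implement the SM technique itself:

    library(mice)
    # Conventional MI workflow on the nhanes data shipped with mice
    imp  <- mice(nhanes, m = 20, seed = 123, printFlag = FALSE)
    fits <- with(imp, lm(chl ~ bmi + age))   # fit the model in each imputed data set
    summary(pool(fits))                      # pool the estimates with Rubin's rules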

Weekly Colloquium

An efficient state space approach to estimate univariate and multivariate multilevel regression models

Fei Gu
Friday, March 29, 2013 -
2:30pm to 3:30pm
Watson Library, Room 455

Estimating multilevel regression models as structural equation models was thoroughly discussed by Bauer (2003) and Curran (2003). Based on the equivalence between structural equation models and state space models (e.g., Chow, Ho, Hamaker, & Dolan, 2010), the state space formulation for the multilevel regression models can be derived by a direct translation of the corresponding structural equation formulation. In this paper, instead of translating the existing structural equation formulation, we introduce a more efficient state space approach to estimating multilevel regression models. Though the state space approach has been well established for decades in the time series literature, it has not received much attention from educational and psychological researchers. To the best of our knowledge, the state space approach to estimating multilevel regression models is barely known, and (almost) never implemented, by multilevel modelers in education and psychology. We first provide a brief outline of the state space formulation. Then, state space forms for univariate and multivariate multilevel regression models are illustrated, and the utility of the state space approach is demonstrated with both simulated and real examples. It is concluded that the results from the state space approach are essentially identical to those from specialized multilevel regression modeling and structural equation modeling software. More importantly, the state space approach is a much more efficient treatment for multilevel regression models.
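For readers unfamiliar with the machinery, here is a minimal, hedged sketch of a local-level state space model estimated by maximum likelihood with the Kalman filter, using the dlm package on simulated data; it illustrates the general approach only, not the multilevel formulation proposed in the talk:

    library(dlm)
    set.seed(11)
    # Simulate a random walk observed with noise
    n  <- 200
    mu <- cumsum(rnorm(n, sd = 0.5))
    y  <- mu + rnorm(n, sd = 1)
    # Local-level model; variances parameterized on the log scale
    build <- function(par) dlmModPoly(order = 1, dV = exp(par[1]), dW = exp(par[2]))
    fit   <- dlmMLE(y, parm = c(0, 0), build = build)
    mod   <- build(fit$par)
    filt  <- dlmFilter(y, mod)     # Kalman filter
    smo   <- dlmSmooth(filt)       # Kalman smoother
    exp(fit$par)                   # estimated observation and state variances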

Weekly Colloquium

ASSISTments viewed as a Shared Scientific Instrument chartered to Provide both Good Science and a Free Public Service

Neil Heffernan, Associate Professor, Computer Science Department, Worcester Polytechnic Institute
Friday, March 15, 2013 -
3:00pm to 4:00pm
Joseph R. Pearson Hall, Room 150

In this talk I will describe a shared scientific instrument being used by multiple universities to do cognitive science research on human learning in K-12 schools. The web-based platform is called "ASSISTments" and is used by teachers and their students for 1) nightly homework support and 2) in-class formative assessment and differentiated instruction. Our schools think of ASSISTments as a valuable free public service. In this talk, I will address the different ways we use ASSISTments to measure knowledge and the different randomized controlled experiments being conducted with ASSISTments, giving multiple examples of how researchers can use the system. Currently, 147 randomized controlled experiments, most very short (minutes long), are being conducted to compare different types of feedback. I will discuss an NSF-funded, eight-year longitudinal tracking study we are doing in cooperation with Professor Ryan Baker at Teachers College. Another example will be a US Dept of Ed $10 million math study in which, in cooperation with Jim Pellegrino and Susan Goldman at UIC, we are exploring how cognitive science principles can improve a commonly used middle school math textbook (Connected Math). I will talk about data mining work showing how data collected with the system yields more accurate predictions of student knowledge, and about the efficacy trial SRI is conducting to see whether ASSISTments can be used to raise Smarter Balanced test scores. I will also talk about joint work with Zach Pardos, who won an award in the KDD Cup challenge in predicting student performance; his use of Bayes nets to track knowledge may be of interest to Dynamic Maps. Finally, I will talk about the online professional development work the Gates Foundation is funding as part of our plan to scale up to 1 million children, and the PD work we are doing with the ~80 teachers a week who ask for accounts.

Weekly Colloquium

The Geometry of Multivariate Statistics

Victoria Savalei, The University of British Columbia
Friday, March 8, 2013 -
3:00pm to 4:00pm
Watson Library, Room 3 West Reading Room

There are two geometric representations of multivariate data: the plotting of subjects in the variable space and the plotting of variables in the subject space. While the former is well known and often used, the latter is rarely taught, yet can yield fascinating insights into many multivariate statistical procedures. In this teaching talk, I will review the subject space representation of multivariate data (i.e., viewing variables as vectors). I will show, by reviewing a bit of elementary vector geometry, that most statistical formulas are absolutely equivalent to a definition or a theorem in geometry. Time permitting, various multivariate techniques will then be illustrated from this perspective, most importantly multiple regression (with ANOVA as a special case) and principal component analysis. Even though these will be reviewed during the talk, reviewing concepts such as the definition of a vector, how to compute the length of a vector, how to add and subtract two vectors, how to compute the angle between two vectors, the definition of an inner product, and Pythagorean theorem, will help the audience get more out of the talk!

Weekly Colloquium

Stochastic Differential Equations and Adaptive Control

Bozenna Pasik-Duncan, Courtesy Professor of EECS, IEEE Fellow & Distinguished Member of IEEE CSS
Friday, March 1, 2013 -
3:00pm to 4:00pm
Watson Library, Room 455

This talk focuses on controlled systems described by stochastic differential equations and on adaptive control, including self-optimizing controls for partially known continuous-time stochastic systems in both finite- and infinite-dimensional spaces. For adaptive control of linear systems, weighted least squares estimators will be presented that always converge, are strongly consistent under weak assumptions, and provide self-optimizing adaptive controls under the natural assumptions of controllability and observability. The current research extends many of the results for control of stochastic systems with Brownian motion to systems with other, more justifiable noise processes such as the family of fractional Brownian motions. In almost every application of control, the controlled system has unknown parameters, so there are the fundamental problems of identifying the unknown parameters while simultaneously controlling the stochastic system. The extension of optimal and adaptive control results to systems driven by processes other than Brownian motion is particularly important because empirical evidence from physical phenomena demonstrates the necessity of other noise processes, such as the family of fractional Brownian motions, in the mathematical models.

Weekly Colloquium

Higher order and bifactor models: Issues in model identification, equivalence, and interpretation

David Flora, Associate Professor, Department of Psychology, York University
Friday, February 15, 2013 -
3:00pm to 4:00pm
Joseph R. Pearson Hall, Room 201

Researchers often hypothesize that the covariance structure for a set of psychological variables is organized according to a set of specific, narrow constructs along with general, more broad constructs. Such a hypothesis is typically examined using either a hierarchical factor model (of which the bifactor model is a special case) or higher order factor model. This talk will explain how these two types of models are distinct conceptually, though they do have a formal mathematical relationship. Issues of model identification, equivalence, and interpretation that are not well-recognized by researchers will be emphasized.

Weekly Colloquium

Instructional Sensitivity: What is it? How can we detect it? How can we enhance it?

Neal Kingston
Friday, February 8, 2013 -
3:00pm to 4:00pm
Joseph R. Pearson Hall, Room 201

Student test scores were first used to support inferences about students. Under No Child Left Behind they were used to hold schools accountable. Under Race to the Top they are being used to determine the effectiveness of individual teachers. Now some folks are arguing that improvement in student scores should be used to determine the effectiveness of schools of education. Underlying all of these uses is the assumption that teachers have a substantial impact on student test scores. Instructional sensitivity is the extent to which a test item can be influenced by good instruction. While there have been few studies of this phenomenon, those studies have identified few items that possess this characteristic. In this talk Neal Kingston will share information about a program underway at CETE and make a call for more methodological and substantive research.

Weekly Colloquium

Advances in Meta-analysis

Terri Pigott, Professor & Associate Dean, School of Education, Loyola University Chicago
Friday, February 1, 2013 -
3:00pm to 4:00pm
Watson Library, Room 3 West Reading Room

This talk will introduce new advances in methods for meta-analysis. As the use of meta-analysis increases, the complexity of the studies included in a research synthesis has spurred the development of new methods for synthesizing study results. These new methods include strategies for synthesizing the results of diagnostic tests, the inclusion of both aggregated data and individual participant data in a meta-analysis, and methods for computing the statistical power of meta-analytic tests.

Weekly Colloquium

Mood Changes Associated with Smoking in Adolescents: An Application of a Mixed-Effects Location Scale Model for Longitudinal Ecological Momentary Assessment (EMA) Data

Donald Hedeker, Ph.D.
Friday, November 16, 2012 -
3:00pm to 4:00pm
Watson Library, Room 455

For longitudinal data, mixed models include random subject effects to indicate how subjects influence their responses over the repeated assessments. The error variance and the variance of the random effects are usually considered to be homogeneous. These variance terms characterize the within-subjects (error variance) and between-subjects (random-effects variance) variation in the data. In studies using Ecological Momentary Assessment (EMA), up to thirty or forty observations are often obtained for each subject, and interest frequently centers around changes in the variances, both within- and between-subjects. Also, such EMA studies often include several waves of data collection. In this presentation, we focus on an adolescent smoking study using EMA at several measurement waves, where interest is on characterizing changes in mood variation associated with smoking. We describe how covariates can influence the mood variances, and also describe an extension of the standard mixed model by adding a subject-level random effect to the within-subject variance specification.

This permits subjects to have influence on the mean, or location, and variability, or (square of the) scale, of their mood responses.  Additionally, we allow the location and scale random effects to be correlated. These mixed-effects location scale models have useful applications in many research areas where interest centers on the joint modeling of the mean and variance structure.

Weekly Colloquium

Life is a Quasi-Experiment: Propensity Risk-Set Matching for Estimating the Causal Effects of Transitional Events on Developmental Outcomes

Katherine Masyn, Harvard University
Friday, November 9, 2012 -
3:00pm to 4:00pm
Watson Library, Room 3 West Reading Room

There are four primary challenges to researchers wishing to examine the potential causal impact of transitional events during the life course on developmental outcomes, such as the impact of first incarceration on the trajectory of antisocial behavior or the impact of birth of first child on the trajectory of relationship satisfaction among married/cohabiting biological parent dyads: 1) The transitional event of interest may not be universal (i.e., not everyone in the population necessarily experiences the event); 2) Those individuals who do experience the event may be systematically different than those who do not in ways related to the longitudinal outcome the event is hypothesized to influence; 3) The timing of the transitional event (i.e., when the event occurs during the life course) may differ among those who do experience the event and there may be shared antecedent causes between event timing and the post-event trajectory; and 4) The timing of the event may itself be related to the post-event trajectory. These challenges are not exclusive to the domains of observational and quasi-experimental longitudinal studies, as even in experimental studies it may only be possible to randomize the “treatment” but not the timing of the treatment.

Propensity score matching (PSM) is one method available for enabling the estimation of causal effects in studies with non-random treatment uptake. However, traditional uses of PSM involve a fixed-time “treatment” and a fixed-time outcome, yielding only one counterfactual outcome for each individual (i.e., [outcome|treatment] or [outcome|control]) and rendering them less suitable for time-dependent events. Propensity score risk-set matching overcomes this limitation, enabling matching even in settings for which the event of interest occurs in continuous time. The technique described in this talk is a particular kind of risk-set matching known as sequential risk-set matching. Although risk-set matching is not new, it is rarely used in social and behavioral research. Further, the applications of the technique thus far have been limited to estimating the causal effects of a time-dependent event on a single post-event outcome, rather than on a longitudinal outcome. In this talk, I will review the details of sequential risk-set matching and present an extension of the technique that enables the estimation of the causal effects of the time-dependent event on a longitudinal change process. This new approach is illustrated with real data to estimate the effect of the timing of joining a gang on substance use trajectories in adolescence and the effect of the timing of obtaining a GED on wage trajectories of high school dropouts

Watch Presentation

Weekly Colloquium

Why the Items versus Parcels Controversy Needn’t Be One

Todd Little
Friday, October 19, 2012 -
3:00pm to 4:00pm
Watson Library, Room 3 West Reading Room

The use of item parcels has been a matter of debate since the earliest use of factor analysis and structural equation modeling. Here, we review the arguments that have been levied both for and against the use of parcels, and discuss the relevance of these arguments in light of the building body of empirical evidence investigating their performance. We discuss the many advantages of parcels that some researchers find attractive and highlight, too, the potential problems that ill-informed use can incur. We argue that no absolute pro or con stance is warranted. Parcels are an analytic tool like any other. There are circumstances in which parceling is useful and times when parcels would not be used. We emphasize the precautions that should be taken when creating item parcels and interpreting model results based on parcels. Finally, we review and compare several proposed strategies for parcel building, and suggest directions for further research.

Weekly Colloquium

Fitting nonstationary time series data: A two-step procedure versus the true state space model

Fei Gu
Friday, October 5, 2012 -
3:00pm to 4:00pm
Watson Library, Room 455

Fitting general state space models usually requires extensive programming skills. Using existing software packages, a two-step procedure is proposed and compared to the true state space model in a simulation study with nonstationary time series data. For the model considered in this study, the two-step procedure provided acceptable point estimates, although the standard errors were greatly inflated. In addition, it is shown that the two-step procedure is very easy to implement and extremely fast. Based on the results of the simulation study, it is concluded that the two-step procedure provides easy access for researchers to obtain useful preliminary results in multivariate time series data analysis.

Weekly Colloquium

Testing for Differential Item Functioning on the Brief Fear of Negative Evaluation Scale with Straightforwardly-worded items (BFNE-S) when Local Independence Fails

Jared Harpole
Friday, September 28, 2012 -
3:00pm to 4:00pm
Watson Library, Room 455

The present talk summarizes an investigation of testing the BFNE-S for differential item functioning (DIF) using IRT. The BFNE-S has been used to evaluate fear of negative evaluation in many different populations. To date, no studies have evaluated the BFNE-S for DIF with respect to gender and ethnicity to ensure that the instrument functions the same across heterogeneous groups. In the course of evaluating DIF in the BFNE-S, the assumption of local independence appeared to have been violated. A procedure was used to propose a revised version of the BFNE-S that met the local independence assumption. Tests for DIF were then carried out, and the results of this analysis are summarized.

Weekly Colloquium

Personality Measurement in High Stakes Settings: Using IRT Methods to Improve the Accuracy and Validity of Scores

Stephen Stark, University of South Florida
Friday, September 21, 2012 -
3:00pm to 4:00pm
Summerfield Room, Adams Alumni Association

Personality constructs have been hypothesized to predict a variety of important outcomes in educational, organizational, and military settings, but for many years, concerns about faking precluded high stakes uses. This talk will summarize research aimed at developing fake-resistant personality measures based on multidimensional IRT methods for test construction and scoring. Laboratory, field, and simulation research will be presented to show the validity of multidimensional pairwise preference test scores, the benefits of adaptive item selection, and the potential for detecting aberrant responding using appropriateness measurement indices.

Weekly Colloquium

From Modeling Long-Term Growth to Short-term Fluctuations: Differential Equations are the Language of Change

Pascal Deboeck
Friday, September 14, 2012 -
3:00pm to 4:00pm
Watson Library, Room 455

Many applied statistical problems seek to address how the change in one variable is related to change in another variable. While the change of one variable with respect to another is the very definition of a derivative, the language of derivatives is often relegated to maximization and minimization problems rather than commonplace discussion of models and applied theories. This presentation will first discuss derivatives as a language framework that is ideal for describing changes in variables, particularly changes with respect to time. This language can be used to understand many common models as relationships between derivatives rather than as seemingly disparate entities. Derivatives can also be used to provide statisticians and applied researchers a common language that can be used to create better matches between models and theory. Second, this presentation will present derivatives as a language that has the potential to change the kinds of questions researchers ask from variables measured repeatedly over time. Examining commonly used models as relationships between derivatives highlights relationships that are rarely explored, particularly when modeling short-term fluctuations. Questions that can be asked through modeling of the relationships between derivatives and methods for implementing these models will be introduced.

Weekly Colloquium

Psychometric Society posters redux

Kelly Crowe, Richard Kinai, Whitney Moore, & Graham Rifenbark
Friday, September 7, 2012 -
3:00pm to 4:00pm
Watson Library, Room 455
Weekly Colloquium

Metapalooza 2012: In-Progress Contributions to Meta-Analysis Methodology

Meta-Analysis Methods Workgroup
Friday, May 4, 2012 -
3:00pm to 4:00pm
Watson Library, Room 455

One aim of the CRMDA methods workgroup on meta-analysis is to advance methodological knowledge in meta-analysis and research synthesis and promote responsible use of these methods. This joint presentation by workgroup members will highlight ongoing investigations that support this aim. Stephen Short will describe the recent surge in meta-analysis publications over the past few decades and discuss potential ways of examining how psychologists perceive and use these meta-analyses. Alex Schoemann will discuss issues associated with dependent effect sizes in meta-analysis and strategies to model dependent effect sizes. Ian Carroll will give a brief presentation explaining and demonstrating methods for meta-analysis of indirect effects. Adam Hafdahl will describe and demonstrate how to use a mixture of posterior distributions to estimate a distribution of effect-size parameters and features of this distribution. Audience questions and constructive feedback are encouraged; crowd surfing is not.

Weekly Colloquium

Recent Advances in Model Evaluation

Aaron Boulton, Sunthud Pornprasertmanit, Terry Jorgensen, and Mauricio Garnier-Villarreal
Friday, April 20, 2012 -
3:00pm to 4:00pm
Watson Library, Room 455

Two goals of the model evaluation work group are (1) to evaluate existing methods of evaluating structural equation models and (2) to develop new methods that account for drawbacks of existing methods. Terry Jorgensen will present recent research on the behavior of the posterior predictive p value (a fit index available in Bayesian SEM) under conditions of increasing model misfit, sample size, and level of informative priors in continuous-indicator confirmatory factor analysis. Mauricio Garnier-Villarreal will present a simulation on the posterior predictive p value in categorical confirmatory factor analysis. Aaron Boulton will present research on a new index of fit for model comparison that takes into account model parsimony but improves on past information criteria (e.g., AIC and BIC, which define parsimony only in terms of the number of estimated parameters) by quantifying parsimony in terms of a model's overall fitting propensity.

Weekly Colloquium

Six level structural equation model for a single dependent variable

Paras Mehta, University of Houston
Friday, April 13, 2012 -
3:00pm to 4:00pm
Watson Library, Room 455

Two-level structural equation modeling (2L-SEM) has become commonplace for fitting SEM models to clustered multivariate data. Conventional multilevel modeling (MLM) accommodates hierarchical/non-hierarchical data structures for a single dependent variable. However, currently available multilevel modeling frameworks and software simply cannot deal with many important but non-standard dependent data structures such as "networks-in-groups".

The goal of this presentation is to introduce a general nLevel SEM framework that makes estimation of fairly complex models straightforward. The conceptual simplicity of the framework and the accompanying software are due to the following features:

(a) While the underlying framework relies on LISREL-like matrices, there is a graphical representation of the model. This allows models to be formulated in a WYSIWYG fashion. 
(b) The software uses a “relational-database” framework to formulate relationships among the data collected from multiple entities with complex dependencies. This approach allows the user to supply data in the ‘most obvious’ format rather than demanding that the data be supplied in an unnatural and idiosyncratic format (e.g., univariate format in Proc Mixed).
(c) The framework itself includes novel modeling constructs such as ‘virtual levels’ and ‘role models’. These constructs make it possible to formulate a model just as the researcher may conceptualize it. In the absence of these constructs, the software specification tends to be opaque to all but the most advanced users.
(d) The framework uses Lego modeling as a metaphor for constructing complex models from simpler components. The construction of basic components is straightforward and follows standard SEM framework, requiring little new learning. Hence, learning how to construct a two-level model is mostly sufficient to construct a fairly complex model.
(e) The nLevel SEM software uses computational features that make it possible to estimate models with very large numbers of levels and observations at each level.

The general modeling framework is presented in the context of a six-level measurement model for “networks-in-groups” data in which each person within a group rates every other person on one or more dimensions. David Kenny has developed a statistical model called the social-relations model (SRM) for such data. Simpler versions of the model can be estimated using conventional software such as Proc Mixed. However, the model specification is tedious and requires undocumented features of the software. The specification itself bears little resemblance to the underlying statistical model and as such would make sense only to an expert. The nLevel SEM formulation of the social relations model is as easy as drawing a path diagram.

The presentation will include a practical demonstration of how various models, including the social relations model, may be specified using the nSEM software. Participants are encouraged to bring examples of complex data sets. Familiarity with multilevel modeling and ML-SEM software may be useful but is not necessary.

Weekly Colloquium

Longitudinal Regime-Switching Models as a Way to Capture Within- and Between-Person Heterogeneities in Change

Sy-Miin Chow, University of North Carolina at Chapel Hill
Friday, April 6, 2012 -
3:00pm to 4:00pm
Watson Library, Room 300

Longitudinal regime-switching models provide one possible way of representing within- and between-person heterogeneities in change by allowing individuals to transition between different latent classes or “regimes” over time. The notion that individuals may manifest quantitatively and/or qualitatively distinct dynamics across different phases of a change process has been a dominant premise of many stagewise developmental theories in psychology. Regime-switching models provide a methodological framework for testing and further extending these theories. Using empirical examples from education, alcohol use and emotions, I will illustrate the utility of such models in enriching our conceptualization of whether and how individuals change over time. The parallels between regime-switching models and other well-known discrete change models in the literature will also be discussed.

Weekly Colloquium

Alternative Procedures to Test Mediation Effect with Missing Data

Wei Wu and Fan Jia
Friday, March 30, 2012 -
3:00pm to 4:00pm
Watson Library, Room 455

This presentation overviews and compares procedures for testing mediation effects through bootstrapping and advanced missing data techniques, including the expectation-maximization algorithm, full information maximum likelihood, and multiple imputation. The presentation also proposes a new procedure for testing mediation with missing data through bootstrapping and multiple imputation. The proposed procedure performs as well as the other procedures in terms of producing accurate statistical inference for the mediation effect. In addition, it works more efficiently than the existing procedure of combining bootstrapping with MI to test mediation effects.

Weekly Colloquium

lavaan: An R Package for Structural Equation Modeling

Yves Rosseel, Ghent University
Friday, March 16, 2012 -
3:00pm to 4:00pm
Watson Library, Room 300

Structural equation modeling (SEM) is a vast field and widely used by many applied researchers in the social and behavioral sciences. Over the years, many software packages for structural equation modeling have been developed, both free and commercial. However, perhaps the best state-of-the-art software packages in this field are still closed-source and/or commercial. The R package lavaan has been developed to provide applied researchers, teachers, and statisticians, a free, fully open-source, but commercial-quality package for latent variable modeling. In this presentation, I will explain the aims behind the development of the package, give an overview of its most important features, and provide some examples to illustrate how lavaan works in practice.
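A hedged, minimal example of lavaan's model syntax along the lines of the package documentation, using the PoliticalDemocracy data that ship with the package:

    library(lavaan)
    model <- '
      # measurement model
        ind60 =~ x1 + x2 + x3
        dem60 =~ y1 + y2 + y3 + y4
        dem65 =~ y5 + y6 + y7 + y8
      # structural regressions
        dem60 ~ ind60
        dem65 ~ ind60 + dem60
      # residual correlations
        y1 ~~ y5
        y2 ~~ y4 + y6
        y3 ~~ y7
        y4 ~~ y8
        y6 ~~ y8
    '
    fit <- sem(model, data = PoliticalDemocracy)
    summary(fit, fit.measures = TRUE, standardized = TRUE)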

Weekly Colloquium

Evaluate Absolute Goodness-of-Fit in State Space Model

Fei Gu
Friday, March 9, 2012 -
3:00pm to 4:00pm
Watson Library, Room 455

In this talk, a brief historical development of linear state space model (SSM) and the Kalman filter (KF) algorithm will be introduced. Mathematical formulation of SSM will be given, and a SAS/IML program to estimate the parameters of SSM will be illustrated. Relative goodness-of-fit comparing different SSMs are readily available (e.g., AIC, BIC), but there is a need to develop a measure to evaluate absolute goodness-of-fit. Bootstrap procedure will be discussed, and results may be presented depending on my dissertation progress.

Weekly Colloquium

simSEM: A new R package for a Monte Carlo simulation in structural equation modeling framework

Alex Schoemann and Sunthud Pornprasertmanit
Friday, February 17, 2012 -
3:00pm to 4:00pm
Watson Library, Room 455

The simSEM package is an R package that helps users simulate and analyze data in the structural equation modeling (SEM) framework. Direct applications of the simSEM package are to a) tailor a fit index cutoff based on a specific hypothesized model, b) find the statistical power to reject a hypothesized model given that the obtained data come from a different population model, and c) find the statistical power of a parameter estimate. The package can also impose missing data on simulated datasets. We will show two applications of the simSEM package. First, we will show how to tailor a fit index cutoff to control the amount of Type I error (i.e., a Monte Carlo approach to model evaluation). Second, we will show how to find statistical power when researchers use a planned missing data design. Statistical power is considered at both the model-evaluation level and the parameter-estimate level.
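A hedged sketch of the same general idea using only lavaan's simulateData() rather than the simSEM interface itself (which is not reproduced here): simulate data from a population model, refit an analysis model in each replication, and record how often a focal parameter is detected.

    library(lavaan)
    pop <- ' f =~ 0.7*y1 + 0.7*y2 + 0.7*y3 + 0.7*y4
             f ~~ 1*f
             y1 ~~ 0.51*y1; y2 ~~ 0.51*y2; y3 ~~ 0.51*y3; y4 ~~ 0.51*y4 '
    ana <- ' f =~ y1 + y2 + y3 + y4 '
    set.seed(2012)
    nrep <- 200; n <- 100
    pow <- mean(replicate(nrep, {
      d   <- simulateData(pop, sample.nobs = n)
      fit <- cfa(ana, data = d, std.lv = TRUE)
      pe  <- parameterEstimates(fit)
      pe$pvalue[pe$lhs == "f" & pe$op == "=~" & pe$rhs == "y2"] < .05
    }))
    pow   # proportion of replications in which the y2 loading is detected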

Weekly Colloquium

Victimization in the Peer Group Exacerbates the link between Preschool Harsh Home Environments and Academic Declines

David Schwartz, University of Southern California
Friday, February 3, 2012 -
3:00pm to 4:00pm
Watson Library, Room 455

This paper presents a prospective investigation focusing on the moderating role of peer victimization on associations between harsh home environments in the preschool years and academic trajectories during elementary school. The participants were 388 children (198 boys, 190 girls) who we recruited as part of an ongoing multisite longitudinal investigation. Preschool home environment was assessed with structured interviews and questionnaires completed by parents. Peer victimization was assessed with a peer nomination inventory that was administered when the average age of the participants was approximately 8.5 years. Grade point averages (GPA) were obtained from reviews of school records, conducted for seven consecutive years. Indicators of restrictive punitive discipline and exposure to violence were associated with within-subject declines in academic functioning over seven years. However, these effects were exacerbated for those children who had also experienced victimization in the peer group during the intervening years.

Weekly Colloquium

Of Beauty, Sex, and Power: Statistical Challenges in Estimating Small Effects

Andrew Gelman, Columbia University
Friday, January 27, 2012 -
3:00pm to 4:00pm
Watson Library, Room 455

A discussion of some difficulties (and common mistakes) in the interpretation of parameter estimates and the usage of the concept "statistical significance." This includes a thorough re-consideration of some claims in Kanazawa's article, "Beautiful parents have more daughters: a further implication of the generalized Trivers-Willard hypothesis," in the Journal of Theoretical Biology. It offers suggestions about model specification and endorses multi-level modeling as an exploratory tool that might help researchers to avoid the all-too-frequent mistake of concluding that a variable is not important simply because its "p value" in a preliminary regression is larger than hoped for.

Weekly Colloquium

Matching Methods for Causal Inference in Observational Data

Gary King, Harvard University
Friday, December 2, 2011 -
2:30pm to 3:30pm
Watson Library, Room 455

Matching is an increasingly popular method of causal inference in observational data, but following methodological best practices has proven difficult for applied researchers. We address this problem by giving a simple overview of how matching methods can work to improve your research. We then provide a simple graphical approach for choosing among the numerous possible matching solutions generated by three methods that we also describe: the venerable "Mahalanobis Distance Matching" (MDM), the commonly used "Propensity Score Matching" (PSM), and a newer approach called "Coarsened Exact Matching" (CEM). In the process of using our approach, we also discover that PSM often approximates random matching, both in many real applications and in data simulated by the processes that fit PSM theory. Moreover, contrary to conventional wisdom, random matching is not benign: it (and thus PSM) can often degrade inferences relative to not matching at all. We find that MDM and CEM do not have this problem, and in practice CEM usually outperforms the other two approaches. However, with our comparative graphical approach and easy-to-follow procedures, focus can be on choosing a matching solution for a particular application, which is what may improve inferences, rather than the particular method used to generate it. For more information, see http://j.mp/causalinference
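As a hedged sketch (not the authors' own software or graphical diagnostics), propensity score matching and coarsened exact matching can both be requested through the MatchIt package; the data below are simulated:

    library(MatchIt)
    set.seed(5)
    n <- 1000
    d <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
    d$treat <- rbinom(n, 1, plogis(0.5 * d$x1 + 0.5 * d$x2))
    d$y     <- 1 + 0.5 * d$treat + d$x1 + d$x2 + rnorm(n)
    # Propensity score matching: nearest neighbour on an estimated propensity score
    m.psm <- matchit(treat ~ x1 + x2, data = d, method = "nearest")
    # Coarsened exact matching
    m.cem <- matchit(treat ~ x1 + x2, data = d, method = "cem")
    summary(m.psm)   # covariate balance before and after matching
    # Estimate the treatment effect on the matched data
    lm(y ~ treat + x1 + x2, data = match.data(m.psm))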

Watch Part One

Watch Part Two

Weekly Colloquium

Modeling multisite longitudinal data with time-invariant and time-varying effects: An application to cognitive decline in Huntington’s disease

Jeffrey D. Long, Department of Psychiatry, Carver College of Medicine, University of Iowa
Friday, November 18, 2011 -
3:00pm to 4:00pm
Watson Library, Room 300

Analyzing data from observational studies can be challenging when there is a relatively complex nesting structure and there are between-subject and within-subject predictor effects. An example is Neurobiological Predictors of Huntington’s Disease (PREDICT-HD), a federally funded study of pre-symptomatic gene-positive persons. PREDICT-HD is a multisite study, meaning participants are nested within several sites; it is also longitudinal, meaning repeated measures are nested within subjects. PREDICT-HD research questions often involve comparisons of initial severity groups that do not change over time (time-invariant), and HD motor diagnosis that does change over time (time-varying). A statistical model appropriate for this context is linear mixed effects regression (LMER). It is shown how random effects in LMER can be used to account for nesting and how static and dynamic predictors can be used to account for time-invariant and time-varying effects. An example is presented in which the Symbol Digit Modalities Test (SDMT) is tracked over time to reveal important differences in cognitive decline based on disease progression. The opportunity is taken to illustrate a particular philosophical approach to applied data analysis based on model comparison with Akaike’s Information Criterion (AIC). It is hoped that the audience will reflect on how LMER might be applied to some of their own data analysis problems.

Watch Part One

Watch Part Two

Weekly Colloquium

Tethering Longitudinal Methods to Developmental Theory: Three Small Suggestions

Kevin Grimm & Nilam Ram
Friday, November 11, 2011 -
2:30pm to 3:30pm
Watson Library, Room 300

Across disciplines, scientific inquiry has been moving from relatively “static” representations of entities and phenomena to more “dynamical” ones. As the articulation of developmental theory has progressed, researchers have demanded more and more powerful research designs, measurement procedures, and analytical techniques for examining how and when individuals change over time. Technological advancements have facilitated innovation of sophisticated models capable of describing and testing many types of linear and nonlinear changes. Through a series of empirical examples covering a wide range of psychological inquiries, we suggest three simple considerations that might facilitate the movement towards theories, analytical methods, and data collection designs that more closely articulate our field’s process-oriented ideals.

Watch Seminar

Weekly Colloquium

Modeling Nonlinear Change via Latent Change and Latent Acceleration Models: Examination of Rates of Change and Acceleration in Latent Growth Curve Models

Kevin Grimm, University of California-Davis
Thursday, November 10, 2011 -
4:30pm to 5:30pm
Watson Library, Room 455

We propose the use of the latent change and acceleration frameworks for modeling nonlinear growth in structural equation models. The latent change and acceleration frameworks provide direct information regarding the rate of change and acceleration for latent growth curves – information not directly available through traditional growth curve models when change patterns are nonlinear with respect to time. Exponential growth models in the three frameworks are fit to longitudinal reaction time data from the Math Skills Development Project to illustrate their use and the additional information gained.

Watch Seminar

Weekly Colloquium

Analyzing Large-Scale EMA Data from a Person-Specific Perspective: Pushing Intraindividual Variability into a ‘Real Time’ World

Nilam Ram, The Pennsylvania State University
Thursday, November 10, 2011 -
1:00pm to 2:00pm
Watson Library, Room 455

Ecological momentary assessment, experience sampling, and diary data streams can be used to measure and model a wide variety of dynamic characteristics and processes. We review and demonstrate how measurement burst designs and univariate summary statistics, multilevel models, and multivariate state-space models are being used to examine between-person differences in intraindividual variation, covariation, and systems constructs (e.g., emotional lability, stress reactivity, socio-emotional complexity). We then explore how emerging technologies and nascent connections with social network analysis (SNA), geographic information analysis (GIA), and data mining frameworks may allow us to move our analytical methods ‘from bench to bedside’ through delivery of individually tailored interventions in real-time.

Watch Seminar

Weekly Colloquium

Translating Training in Assessment Development to Real World Experiences

Marianne Perie, Center for Assessment
Friday, November 4, 2011 -
3:00pm to 4:00pm
Watson Library, Room 300

Dr. Marianne Perie will describe her experiences in 18 years working in educational assessment. In graduate school, we learn about item design, scoring, IRT, item calibration, equating, standard setting, and validity evaluation. However, in the real world, we work with politicians who come and go, timelines that don't match best practice, and legislation that requires the technically impossible. Dr. Perie will discuss the contrast between textbook best practice and how we often make testing programs work in the real world. The result is creative psychometrics that we can stand by but which we were never taught in school.

Weekly Colloquium

Dynamic Relations in Early Communication Skills’ Growth Trajectories of Infants and Toddlers

Rawni Anderson, Juniper Gardens Children's Project, University of Kansas
Friday, October 28, 2011 -
3:00pm to 4:00pm
Watson Library, Room 455

The Early Communication Indicator (ECI) is one of a growing class of general outcome measures (GOM) emerging in early education and early childhood special education designed to identify young children not making expected progress, plan changes in the intensity of early intervention, and monitor individual child progress given a change in intervention. Results of previous research evaluating the ECI are consistent with widely accepted theories of language suggesting that simpler elements of communication precede more complex language development. Findings support the hypothesis that dynamic relations exist within and between ECI key skill elements of communication that may inform benchmarks and decision making related to early intervention in the development of communication proficiency. The proposed research expands upon previous findings by modeling growth in infants’ and toddlers’ early expressive communication; more specifically, the present study aims to identify predictive relations within and between participants’ status (intercept) and growth (slope) in early communication key skill elements and total communication proficiency. Piecewise linear growth modeling is particularly suitable for the present objective, given that variable rates of change in particular key skill elements across time may inform periods of heightened sensitivity to targeted intervention.

Weekly Colloquium

Evaluating Factorial Invariance Across Continuous Sampling Dimensions in a Single-Group Analysis

Daniel Bontempo
Friday, October 21, 2011 -
3:00pm to 4:00pm
Watson Library, Room 455

Dr. Daniel Bontempo, Co-Director of the Analytic Techniques & Technology Core of the KU Center for Biobehavioral Neurosciences in Communication Disorders, will present some recent work on measurement models testing factorial invariance (FI) across levels of a continuous covariate. We demonstrate the validity of moderated FI by showing that the special case of a continuous dummy code (0/1) used as a moderator will produce the same model fit and parameter estimates as the two-group CFA procedures for factorial invariance. We provide simulated data and annotated Mplus code to specify both moderated and two-group invariance models. Finally, we provide some discussion of the outstanding research agenda and/or potentially problematic assumptions of the moderated FI approach.

Weekly Colloquium

PhD The Movie

Woodruff Auditorium at the Kansas Union
Friday, October 7, 2011 -
12:00pm to 1:00pm
Watson Library, Room 455

The KU School of Education Center for Educational Testing and Evaluation will be sponsoring a showing of PhD The Movie (http://www.phdcomics.com/movie/) on Friday October 7th at 3:30 PM at Woodruff Auditorium at the Kansas Union. Admission is free for all doctoral students, former doctoral students, potential doctoral students, and people too wise to be doctoral students. Undecided people are also invited.

This movie is only being distributed at university campuses and will be your only chance ever, ever, ever to see this cultural phenomenon.

Weekly Colloquium

Comparing Logit and Probit Coefficients between Models and Across Groups

Richard Williams, University of Notre Dame
Friday, September 23, 2011 -
3:00pm to 4:00pm
Watson Library, Room 455

Social Scientists are often interested in seeing how the effects of variables differ between models or across groups. For example, a researcher might want to know whether the estimated effect of race on some outcome declines once education is controlled for, or whether the effect of education is greater for whites than it is for blacks. In OLS regression with continuous dependent variables, such issues are often addressed by estimating sequences of nested models and/or by including interaction terms in the analysis. Unfortunately, these same approaches can be highly problematic when binary and ordinal dependent variables are analyzed via probit or logistic regression. Naïve comparisons of coefficients between models or across groups can indicate differences where none exist, hide differences that do exist, and even show differences in the opposite direction of what actually exists. This talk explains the problems and discusses the strengths and weaknesses of various proposed solutions, including Y-standardization (Winship & Mare, 1984), heterogeneous choice models (Allison 1999; Williams 2009 & 2010) and group comparisons using predicted probabilities (Long 2009).
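As a hedged sketch of the issue on simulated data (not the solutions surveyed in the talk), the naive interaction approach and a comparison on the probability scale look like this:

    set.seed(9)
    n <- 2000
    d <- data.frame(group = rbinom(n, 1, 0.5), educ = rnorm(n, 12, 2))
    d$y <- rbinom(n, 1, plogis(-4 + 0.3 * d$educ + 0.2 * d$group))
    # Naive approach: compare coefficients via a group interaction
    fit <- glm(y ~ educ * group, data = d, family = binomial)
    summary(fit)$coefficients
    # Comparison on the probability scale: predicted probabilities by group
    nd <- expand.grid(educ = c(10, 12, 16), group = c(0, 1))
    nd$phat <- predict(fit, newdata = nd, type = "response")
    nd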

Weekly Colloquium

Using Stata’s Margins Command to Estimate and Interpret Adjusted Predictions and Marginal Effects

Richard Williams, University of Notre Dame
Thursday, September 22, 2011 -
4:00pm to 5:00pm
Watson Library, Room 455

Many journal articles go on at great length about the sign and statistical significance of effects, but often there is very little emphasis on their substantive significance. Particularly with nonlinear models like logistic regression, it can be very difficult to get a practical feel for what the results mean. For example, analyses might show us that blacks are less likely to be hired than are otherwise-comparable whites, but are they, on average, one percent less likely, twenty percent less likely, or what? In this talk, I show how the use of adjusted predictions and marginal effects can make the meaning and importance of results much clearer. I illustrate how the margins command, introduced in Stata 11, makes such calculations straightforward, and is generally far superior to commands that preceded it, like adjust and mfx. I further show that margins can estimate MEMs (marginal effects at the means), AMEs (Average Marginal Effects) and MERs (Marginal Effects at Representative Values), and discuss pros and cons of each approach.
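Because Stata code is not shown here, the following hedged base-R sketch computes an average marginal effect (AME) by hand for a simulated hiring example: the AME of a binary predictor is the average difference between each observation's predicted probability with the predictor set to 1 versus 0.

    set.seed(10)
    n <- 5000
    d <- data.frame(black = rbinom(n, 1, 0.3), exper = rnorm(n, 10, 4))
    d$hired <- rbinom(n, 1, plogis(-1 + 0.05 * d$exper - 0.4 * d$black))
    fit <- glm(hired ~ black + exper, data = d, family = binomial)
    # AME of `black`: average of P(hired | black = 1) - P(hired | black = 0)
    d1 <- transform(d, black = 1)
    d0 <- transform(d, black = 0)
    ame <- mean(predict(fit, d1, type = "response") -
                predict(fit, d0, type = "response"))
    ame   # average marginal effect on the probability scale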

Weekly Colloquium

Body, Brain, & Behavior: The Effects of Physical Activity on Brain & Cognition in Older Adults

Amber Watts
Friday, September 16, 2011 -
3:00pm to 4:00pm
Watson Library, Room 455

Dr. Watts will describe three studies from her research investigating the relationship between physical activity, brain health, and cognitive performance in older adults with and without dementia. The first study describes difficulties in measuring physical activity in relatively sedentary older adults with and without dementia. She will introduce her grant proposal to improve measurement approaches in this population. The second study investigates the relationship between lean body mass and brain volume loss in older adults with dementia and the potential role of physical activity. Finally, she will describe a bivariate longitudinal model that attempts to unravel causal direction in the relationship between physical activity and reasoning performance over time in healthy older adults. The model addresses the question of whether physical activity leads to better cognitive function or whether individuals with healthy cognitive function are more likely to be physically active.

Weekly Colloquium

No Need to be Discrete: A Method for Continuous Time Mediation Analysis

Pascal Deboeck
Friday, September 9, 2011 -
3:00pm to 4:00pm
Watson Library, Room 455

The concept of mediation is one that has shaped the theories put forth by numerous researchers. In the last two decades, however, the list of problems associated with mediation models has been growing. It has been shown that mediation models based on cross-sectional data can produce unexpected estimates, so much so that trying to make longitudinal and causal inferences from cross-sectional mediation models is inadvisable. Even longitudinal mediation models are not without faults, as the results produced by these models are specific to the lag between observations; this has led to much discussion about the selection of appropriate lags. A few researchers have suggested that problems with longitudinal mediation models might be ameliorated through the use of continuous time mediation models, rather than the discrete time models commonly used. We demonstrate methodology that can be used for continuous time mediation analyses. Simulated examples and a reanalysis of a published covariance matrix are used to demonstrate: 1) that continuous time models can be fit to the same types of data already being collected for longitudinal mediation studies, 2) the additional information that can be gained through continuous time analyses, and 3) the effect of one construct on another can be understood independent of the lag selected for data collection.
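As a hedged sketch of the standard single-lag (discrete-time) mediation setup that the talk contrasts with continuous-time models, here is a bootstrapped indirect effect in lavaan on simulated data:

    library(lavaan)
    set.seed(8)
    n <- 300
    x <- rnorm(n)
    m <- 0.5 * x + rnorm(n)
    y <- 0.4 * m + 0.2 * x + rnorm(n)
    d <- data.frame(x = x, m = m, y = y)
    model <- ' m ~ a * x
               y ~ b * m + c * x
               ab := a * b          # indirect effect
               total := c + a * b '
    fit <- sem(model, data = d, se = "bootstrap", bootstrap = 500)
    parameterEstimates(fit, boot.ci.type = "perc")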

Weekly Colloquium

Bayesian Imputation for Meta-Analysis of Degraded Effect Sizes

Adam Hafdahl, ARCH Statistical Consulting, LLC
Friday, September 2, 2011 -
3:00pm to 4:00pm
Watson Library, Room 455

Meta-analysts who synthesize aggregate data from primary studies often confront "degraded" estimates of effect size (ES): The author has reported only sample size(s) and either the ES estimate's direction, 1 or 2 p-value boundaries for a relevant hypothesis test (e.g., "p > .05" or ".01 < p < .05"), or other coarse information (e.g., "p = .4"). I propose an easy, broadly applicable Bayesian technique to include a degraded ES estimate in conventional meta-analytic procedures: Obtain the mean and variance of the ES parameter's posterior distribution, given the degraded result, and find the (non-degraded) ES estimate and conditional variance (CV) that yield the same posterior mean and variance. The latter imputed ES and adjusted CV may be meta-analyzed just like their non-degraded counterparts. Given a prior distribution on the ES parameter, we can obtain the posterior moments by simple quadrature or simulation techniques. We can apply this Bayesian ES imputation (BESI) strategy to a variety of ES metrics and analyze its imputed values with numerous meta-analytic procedures, including those with between-studies variance components or moderators. Monte Carlo studies suggest that BESI performs well compared to other simple strategies, such as imputing a near-0 boundary or 0 for an ES estimate reported as significant or not, respectively, or omitting degraded ESs-- especially when ES degradation depends on the ES (i.e., degraded not at random) -- and can be used in more situations than vote counting.
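
The quadrature step described above can be sketched in a few lines. The snippet below assumes a specific degraded report ("significant at .05 with a positive estimate"), a normal prior on the ES, and a known sampling SD; it computes the posterior mean and variance on a grid and then back-solves the imputed ES and adjusted conditional variance that a normal-normal update would need in order to reproduce them. This is an illustration of the general idea, not the speaker's implementation.

```python
import numpy as np
from scipy.stats import norm

mu0, tau2 = 0.0, 0.5 ** 2   # prior mean and variance for the ES parameter (assumption)
se = 0.25                    # sampling SD of the unreported ES estimate (assumption)

# Degraded report: "significant at .05, positive direction", i.e. ES_hat > 1.96 * se
grid = np.linspace(-3, 3, 6001)
lik = norm.sf(1.96 - grid / se)              # P(report | ES = grid value)
w = lik * norm.pdf(grid, mu0, np.sqrt(tau2))
w /= w.sum()

m = np.sum(grid * w)                         # posterior mean
v = np.sum((grid - m) ** 2 * w)              # posterior variance

# Find the non-degraded estimate and conditional variance (CV) whose
# normal-normal update with the same prior reproduces this posterior.
cv = 1.0 / (1.0 / v - 1.0 / tau2)
es_imp = cv * (m / v - mu0 / tau2)
print(f"posterior mean {m:.3f}, variance {v:.4f}; imputed ES {es_imp:.3f}, adjusted CV {cv:.4f}")
```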

Weekly Colloquium

Fit Index Sensitivity in Multilevel Structural Equation Modeling

Aaron Boulton
Friday, April 29, 2011 -
4:00pm to 5:00pm
Watson Library, Room 455

A key feature of structural equation modeling (SEM) is the ability to assess the goodness of fit of a theoretical model to data. The performance of fit indices in single-level SEM has been an active area of research for the past 30 years. However, little is known about the application of these indices to multilevel SEM (MSEM). I discuss model fit evaluation in MSEM and report results from a small simulation study of fit index sensitivity.

Weekly Colloquium

Is Ignoring Multilevel Structure Ever Justified in Confirmatory Factor Analysis?

Sunthud Pornprasertmanit and Jaehoon Lee
Friday, April 29, 2011 -
3:00pm to 4:00pm
Watson Library, Room 455

Researchers often ignore the multilevel structure of their data and use methods designed for use with single-level data. We first address the consequences of ignoring the nested nature of data in general and then in confirmatory factor analysis (CFA) specifically. With a simulation study, we show under what conditions applying single-level CFA to multilevel data leads to significant bias in both parameter estimates and standard errors.

Weekly Colloquium

Reliability Estimation in a Multilevel Confirmatory Factor Analysis Framework

G. John Geldhof
Friday, April 29, 2011 -
2:00pm to 3:00pm
Watson Library, Room 455

Social science researchers largely acknowledge that two-stage sampling produces scale variance within and between clusters. A growing body of literature has been dedicated to multilevel hypothesis testing (e.g., multilevel regression), yet little work has examined level-specific scale reliability when testing multilevel hypotheses. I present an MSEM approach to level-specific reliability estimation and discuss the consequences of ignoring multilevel data structures.

Weekly Colloquium

A Multilevel SEM Strategy for Examining Dyadic Correlations

Mijke Rhemtulla and Alexander M. Schoemann
Friday, April 29, 2011 -
1:00pm to 2:00pm
Watson Library, Room 455

Social scientists frequently study variable relations at the dyad and individual levels. Several techniques have been proposed to decompose correlations into dyadic and individual components; however, these methods tend to be multi-stage and cumbersome. We use MSEM to efficiently decompose variable relations into dyad- and individual-level components, resulting in accurate standard errors and precise estimates of intraclass correlations, as well as variable relations.

Weekly Colloquium

Multilevel Structural Equation Modeling

Kristopher J. Preacher
Friday, April 29, 2011 -
12:00pm to 1:00pm
Watson Library, Room 455

Nested data (e.g., children nested in schools, repeated measures within people) require statistical procedures that are able to address the dependencies induced by such data. Multilevel structural equation modeling (MSEM) has emerged as an inclusive, flexible modeling framework for addressing causal and correlational hypotheses involving nested data. In this symposium, members of the MSEM Workgroup showcase several new directions for MSEM research.

Weekly Colloquium

Predictive Margins for Group Comparisons in Logit and Probit Models

J. Scott Long, Indiana University
Friday, April 22, 2011 -
3:00pm to 4:00pm
Watson Library, Room 455

Group comparisons in regression models for binary outcomes are complicated by an identification problem that is inherent in these models. Traditional tests for the equality of independent variables’ coefficients across groups confound the magnitude of the regression coefficients with residual variation. Previous attempts to deal with this problem have developed new tests that, unfortunately, require additional onerous assumptions about the structure of the similarities and differences that exist across the groups. This talk suggests an alternative approach, based upon predicted probabilities. The latter are not affected by residual variation. Furthermore, comparisons of predicted probabilities can be made across groups without requiring a priori assumptions about the values of the regression coefficients. This approach to interpretation is related to general issues of identification in models for binary outcomes. Using predicted probabilities requires researchers to think carefully about making comparisons across groups. Tests for the equality of predicted probabilities require multiple comparisons since group differences in predictions vary with the levels of the variables in the model. This, in turn, usually leads to more complex conclusions about intergroup differences in the effects of the independent variables.

Weekly Colloquium

Discrete Hidden Markov Models and Their Statistical Properties

Fengmei Wu
Friday, April 15, 2011 -
4:00pm to 4:30pm
Watson Library, Room 455

This talk introduces Discrete Hidden Markov Models (DHMMs) and extended DHMMs, presents methods for estimating the parameters of these models, and investigates the statistical properties of the parameter estimates.

Weekly Colloquium

Say Goodbye to Cutoffs: Using Empirical Sampling Distributions of Fit Indices to Determine Appropriate Level of Model Fits in Structural Equation Modeling

Sunthud Pornprasertmanit
Friday, April 15, 2011 -
3:00pm to 3:45pm
Watson Library, Room 455

To evaluate model fit in structural equation modeling (SEM), researchers usually rely on pre-specified cutoffs for particular fit indices. These cutoffs are typically established from experience or from simulation studies (e.g., Hu & Bentler, 1999) conducted on a limited range of models (e.g., measurement models). As a result, the cutoffs are not always appropriate for a given tested model. This presentation introduces a simulation-based method that allows researchers to tailor the cutoffs to their target models so that model fit can be evaluated with greater accuracy and flexibility. The proposed method can also be extended to compare nested and nonnested models, as well as to conduct power analysis, bringing theory into the process of model fit evaluation and selection so that model misspecification can be interpreted with careful theoretical consideration.
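
The general recipe can be sketched independently of any particular SEM package. In the outline below, simulate_from_model and fit_and_get_index are hypothetical callables standing in for whatever software is actually used; the point is simply that the cutoff is a quantile of the fit index's empirical sampling distribution under the researcher's own model and sample size.

```python
import numpy as np

def tailored_cutoff(simulate_from_model, fit_and_get_index, n, reps=500, alpha=0.05, seed=0):
    """simulate_from_model(n, rng) and fit_and_get_index(data) are hypothetical
    callables supplied by whatever SEM software is actually used."""
    rng = np.random.default_rng(seed)
    values = [fit_and_get_index(simulate_from_model(n, rng)) for _ in range(reps)]
    # An observed index above this quantile is worse than expected for a
    # correctly specified model of this form at this sample size.
    return np.quantile(values, 1 - alpha)
```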

Weekly Colloquium

Getting the Most Out of Your Power Simulations in Latent Variable Models

Stas Kolenikov, University of Missouri
Friday, April 8, 2011 -
3:00pm to 4:00pm
Watson Library, Room 300

In this talk, I will demonstrate how statistical theory can guide simulations aimed at determining the power of tests in structural equation models (SEMs). I will pick a few important findings from the existing literature on power analysis in SEM, and demonstrate how these pieces of information can be used to effectively set up and analyze power simulations. By using a large number of settings, including varying sample sizes, degrees of non-normality and the magnitude of model misspecification, I demonstrate how both large sample and small sample effects affect the performance of the (quasi-) likelihood ratio statistic and the quality of the non-central chi-square approximation. The proposed framework allows for flexible simulations with minimal computational requirements.
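
For reference, the large-sample piece of such a calculation is straightforward: under a misspecified model, the (quasi-)likelihood ratio statistic is approximately noncentral chi-square, so power is the tail probability beyond the central critical value. The degrees of freedom and per-observation noncentrality below are assumed example values, not results from the talk.

```python
from scipy.stats import chi2, ncx2

df, alpha = 24, 0.05
ncp_per_obs = 0.05   # misspecification per observation (assumed for illustration)
for n in (100, 200, 400, 800):
    crit = chi2.ppf(1 - alpha, df)
    power = ncx2.sf(crit, df, ncp_per_obs * n)
    print(f"n={n}: power={power:.3f}")
```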

Weekly Colloquium

Exploratory Factor Analysis with Ordinal Data

Guangjian Zhang, University of Notre Dame
Friday, March 18, 2011 -
3:00pm to 4:00pm
Watson Library, Room 300

Applications of exploratory factor analysis often involve ordinal data such as Likert variables. Factor analyzing the product-moment correlation matrix of ordinal data is inappropriate because it ignores the fact that ordinal data are discrete. A popular alternative is to assume that a continuous variable underlies each ordinal variable. The correlation between two underlying continuous variables is referred to as a polychoric correlation. In this talk, we consider ordinary least squares estimation of the exploratory factor analysis model with a polychoric correlation matrix. In particular, we present a procedure for estimating asymptotic standard errors of obliquely rotated factor loadings and factor correlations. The procedure will be illustrated with an empirical study and a simulation study.
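
A bare-bones two-step polychoric correlation for a single pair of ordinal items can be sketched as follows (illustrative only, not the speaker's code): thresholds are estimated from the marginal proportions, and the correlation is then chosen to maximize the bivariate-normal likelihood of the contingency table.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import minimize_scalar

def thresholds(counts):
    cum = np.cumsum(counts) / counts.sum()
    return norm.ppf(np.clip(cum[:-1], 1e-12, 1 - 1e-12))

def polychoric(table):
    table = np.asarray(table, dtype=float)
    a = np.concatenate(([-8.0], thresholds(table.sum(axis=1)), [8.0]))  # row thresholds
    b = np.concatenate(([-8.0], thresholds(table.sum(axis=0)), [8.0]))  # column thresholds

    def negloglik(rho):
        bvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
        ll = 0.0
        for i in range(table.shape[0]):
            for j in range(table.shape[1]):
                cell = (bvn.cdf([a[i + 1], b[j + 1]]) - bvn.cdf([a[i], b[j + 1]])
                        - bvn.cdf([a[i + 1], b[j]]) + bvn.cdf([a[i], b[j]]))
                ll += table[i, j] * np.log(max(cell, 1e-12))
        return -ll

    return minimize_scalar(negloglik, bounds=(-0.99, 0.99), method="bounded").x

# Example: a 3x3 cross-tabulation of two Likert-type items (made-up counts)
print(polychoric([[20, 10, 5], [12, 30, 15], [4, 18, 36]]))
```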

Weekly Colloquium

Ordinal Multiple Regression and Extensions

Carol Woods
Friday, March 11, 2011 -
3:00pm to 4:00pm
Watson Library, Room 455

Dominance-based ordinal multiple regression (DOMR; Cliff, 1994, 1996) is a model, based on tau-a (Kendall, 1938), for answering questions about ordinal relationships between one ordinal outcome and one or more ordinal predictors. In this talk, I will describe DOMR, distinguish it from the more popular proportional odds model, and describe research in progress to extend it to other ordinal measures of association.
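
For readers unfamiliar with tau-a, the building block is simple: it is the difference between the proportions of concordant and discordant pairs among all n(n-1)/2 pairs, with ties counting as neither. The toy data below are made up for illustration.

```python
from itertools import combinations
import numpy as np

def tau_a(x, y):
    pairs = list(combinations(range(len(x)), 2))
    # sign is +1 for a concordant pair, -1 for discordant, 0 for a tie
    s = sum(np.sign((x[i] - x[j]) * (y[i] - y[j])) for i, j in pairs)
    return s / len(pairs)

x = np.array([1, 2, 2, 3, 4, 5])
y = np.array([1, 1, 3, 2, 5, 4])
print(tau_a(x, y))
```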

Weekly Colloquium

The Big Picture of Latent Variable Analysis: Connecting the Dots from IRT to SEM

Emily Fall
Friday, February 25, 2011 -
3:00pm to 4:00pm
Watson Library, Room 455

The world of latent variable modeling is so complex and diverse that researchers often - sometimes out of necessity - specialize in methods such as IRT or SEM and lose sight of the bigger picture. The purpose of this talk is to examine several of the 'special cases' of the general(ized) linear model and draw connections between the methods, their similarities and differences, and their specific applications. Special attention will be paid to latent class analysis.

Weekly Colloquium

Issues in the Development of a National Through-Course Assessment Program

Dianne Henderson Montero, Educational Testing Service (ETS)
Friday, February 18, 2011 -
3:00pm to 4:00pm
Watson Library, Room 300

The US Department of Education has funded two consortia to develop assessments aligned with the Common Core Standards. The goal of the two funded consortia is to create assessments that take advantage of emerging technology to provide valuable feedback to teachers, students, and parents using a mixture of item types, new delivery modes, and scoring models. A key component of both consortia is the collection of information throughout the year, although the PARCC consortium has specifically indicated that it plans to collect information that will contribute to the final score. These "through-course" assessments present unique measurement challenges, beginning with articulating the design of the assessment and extending through administration, scoring of responses, and creation of the final score. This paper will outline some of these challenges and describe potential solutions the consortia could consider, recognizing that the eventual solution will necessarily be a compromise depending on the goals of the stakeholders of primary importance.

Weekly Colloquium

What Does the Value of the RMSEA Really Tell us About the Amount or Type of Model Misspecification in CFA Models?

Victoria Savalei, The University of British Columbia
Friday, February 4, 2011 -
3:00pm to 4:00pm
Watson Library, Room 455

The fit index RMSEA is extremely popular in structural equation modeling (SEM). Popular guidelines suggest that RMSEA < .05 or .06 indicates "good" model fit. However, the relationship between RMSEA and various types of model misspecification remains poorly understood. Previous studies have mostly focused on the behavior of the sample RMSEA in a few selected conditions. The present study focuses on the population RMSEA. Understanding population behavior is an essential step before the influence of sampling fluctuations can be considered. The present study also uses a novel approach of generating continuous curves to fully capture the relationship between RMSEA and the size of the omitted parameter or other model feature. The context is confirmatory factor analysis (CFA) models. Many new and intriguing findings emerge. Some of these are as follows: when it comes to detecting omitted residual correlations or omitted cross-loadings, RMSEA is sensitive to the size of the factor loadings yet fairly insensitive to the size of the factor correlations; RMSEA can be a non-monotonic function of the number of indicators; RMSEA has a curvilinear relationship with the number of omitted cross-loadings; and RMSEA is generally not able to distinguish among models with different underlying numbers of factors.

Weekly Colloquium

Reframing and Extending Traditional Social Science Statistics Using a Likelihood/Information Paradigm: The Illustrative Case of Analysis of Variance

Greg Hancock, University of Maryland
Friday, December 3, 2010 -
1:00pm to 2:00pm
Watson Library, Room 300

Those who have acquired training in modern modeling methods, such as structural equation, latent class, or latent trait modeling, will notice that methodological practice in those areas differs considerably from practice in the traditional general linear modeling methods that form the foundation of most social science statistical training. These differences concern, for example, the specification of models, the articulation of competing models, the role of researcher judgment, and the generally evidentiary nature of the modeling process. If such a paradigm is appropriate for these more sophisticated modern modeling methods, why is it not used within the more foundational analytical scenarios? The current presentation will argue that indeed it should be, and will use the case of analysis of variance to illustrate how traditional social science statistical methods can be reframed and extended, signifying a paradigm shift in methodological practice as well as methodological thinking.

Weekly Colloquium

The Simplexity of Sexual Submission

Steve Short
Friday, November 19, 2010 -
3:30pm to 4:00pm
Watson Library, Room 455

Simplex models are an extension of structural equation modeling (SEM) that allow the researcher to test for gradual, yet consistent change across constructs. The simplex structure has a variety of applications, including longitudinal designs, or in the examination of the psychometric properties of a measure. The present talk will begin with an overview of simplex models and then apply the presented principles to a newly developed sexual fantasy measure. A simplex model demonstrating gradual change from preference for neutral sexual fantasy vignettes to vignettes with increasing themes of sexual dominance or sexual submission will be presented. Finally, this new scale with a validated high dominant to high submissive preference continuum will be compared to previous methods of measuring individuals’ sexual fantasy preferences.

Weekly Colloquium

Testing Screw-ups

Howard Wainer, Wharton School of the University of Pennsylvania
Friday, November 12, 2010 -
3:00pm to 4:00pm
Watson Library, Room 455

In any large enterprise, errors are inevitable. Thus the quality of the enterprise must be judged not only by the frequency and seriousness of the errors but also by how they are dealt with once they occur. In this talk I will discuss three testing screw-ups that span the range from errors in test scoring to errors in score interpretation and usage. The first example comes from a scoring error that NCS Pearson, Inc., under contract to the College Entrance Examination Board, made on October 8, 2005, in an administration of the SAT Reasoning Test. The second example is from a September 2008 report published by the National Association for College Admission Counseling, in which one of the principal recommendations was for colleges and universities to reconsider requiring the SAT or the ACT for applicants. The third example derives from the results on a third-grade math test at an elementary school in the northeastern United States, where a teacher was suspended without pay because her class did unexpectedly well. All the examples illustrate the fundamental principle, enunciated by the 12th-century philosopher Moses Maimonides, "If you think doing it right is expensive, try doing it wrong."

Weekly Colloquium

Impulsivity, latent variable modeling, and the advancement of psychological theory

Steve Reise, University of California-Los Angeles
Friday, October 29, 2010 -
3:00pm to 4:00pm
Watson Library, Room 300

Using the umbrella concept of "impulsivity" Dr. Reise considers the role of latent variable modeling (growth, mixture, structural equations, and item response theory) and related techniques as he takes us from gene to (endo)phenotype, and reviews the type of data collection and quantitative analyses needed to address key substantive questions. Among the key messages of the talk are that: a) interdisciplinary interchange is critical to future advancements in psychology, and, most importantly, b) training in measurement theory is needed now more than ever. The purpose of the presentation is to convince you of these points and to challenge, excite, and invigorate the next generation of quantitative research as it takes on the challenges of the 21st century.

Weekly Colloquium

Choosing among misspecified models when trying to study causal pathways in daily diary studies

Patrick Shrout, New York University
Friday, October 22, 2010 -
3:00pm to 4:00pm
Watson Library, Room 455

Multilevel models provide important tools for studying causal hypotheses using daily diary studies, but such studies often omit variables that are known to be related to the dynamic processes of interest. To adjust for omitted variables, we often model the covariance structure of the residuals. In this talk I review various approaches toward finding an adequate model, and show that different conventional approaches can give dramatically different results. I illustrate the methodological issues with an analysis of how daily conflict in intimate couples affects subsequent relationship closeness.

Weekly Colloquium

Compare Different Reliability Calculation Methods in Testlet Based Computerized Adaptive Tests

Wenhao Wang
Friday, October 8, 2010 -
4:00pm to 5:00pm
Watson Library, Room 455

In testlet-based computerized adaptive tests, testlets are administered to students according to their ability levels. Thus, students do not all take the same items; each takes only a subset of the items in the item bank. In these circumstances, new reliability calculation methods are used: sparse matrix reliability, marginal reliability, and estimated error reliability. This study compares these methods in testlet-based computerized adaptive tests with both simulated and real data. The results present the reliabilities under different testlet-based computerized adaptive testing environments. They suggest that sparse matrix reliability works well when the testlet selection method is not related to the item variance, while marginal reliability and estimated error reliability are higher when an adaptive testlet selection method, rather than other selection methods, is applied.

Weekly Colloquium

Instrumental Variables?

Aaron Boulton
Friday, October 8, 2010 -
3:00pm to 4:00pm
Watson Library, Room 455

An instrumental variable (IV) is a special type of variable used in regression analysis to address a variety of estimation-related problems. The use of such variables is routine in fields such as economics, but little is known about them in other social science disciplines. One of the more common functions of IVs is to over-identify a model in order to simultaneously estimate a set of regression equations. Given this function, IVs may have several interesting applications in structural equation modeling, which has become a dominant paradigm for simultaneous equation estimation in the social and behavioral sciences. The goal of this presentation is to provide an introduction to IVs as used in regression analysis and to discuss ways in which they may be effectively used in structural equation modeling.
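
For concreteness, here is a bare-bones two-stage least squares sketch with a single instrument and simulated data (illustrative only): the endogenous predictor is first regressed on the instrument, and its fitted values then replace it in the outcome regression. The point estimate is the 2SLS estimate, but note that the second-stage standard errors printed this way are not the correct 2SLS standard errors.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)                        # unobserved confound
x = 0.7 * z + 0.8 * u + rng.normal(size=n)    # endogenous predictor
y = 0.5 * x + 1.0 * u + rng.normal(size=n)    # outcome; true effect of x is 0.5

stage1 = sm.OLS(x, sm.add_constant(z)).fit()           # stage 1: x on the instrument
stage2 = sm.OLS(y, sm.add_constant(stage1.fittedvalues)).fit()  # stage 2: y on fitted x
print(stage2.params)   # slope near 0.5; naive OLS of y on x would be biased upward
```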

Weekly Colloquium

Campaign Support, Conflicts of Interest, and Judicial Impartiality: Can Recusals Rescue the Legitimacy of Courts?

Jim Gibson
Friday, October 1, 2010 -
4:00pm to 5:00pm
Watson Library, Room 300

This paper investigates citizen perceptions of the impartiality and legitimacy of courts, focusing on a state (West Virginia) that has recently been a battleground for conflict over campaign support, perceived conflicts of interest, and loss of impartiality. We employ an experimental vignette embedded within a representative sample to test hypotheses about factors affecting perceived judicial impartiality. Perhaps not surprising is our finding that campaign contributions threaten the legitimacy of courts. More unexpected is evidence that contributions offered but rejected by the candidate have similar effects to contributions offered and accepted. And, although recusal can rehabilitate a court/judge to some degree, the effect of recusal is far from the complete restoration of the institution’s impartiality and legitimacy. The processes by which citizens form and update their opinions of judges and courts seem at least to involve pre-existing attitudes, expectations of judges, and perceptions of contextual factors.

Weekly Colloquium

Software Best Practices

Kim Gibson
Friday, October 1, 2010 -
3:00pm to 4:00pm
Watson Library, Room 455

As computing power becomes ever cheaper, more complex analytical tools are available for the research analyst. These statistical tools now often require knowledge of programming skills in addition to the ability to interpret the results. Based on over a decade of software engineering experience, I will discuss how software engineering best practices can help statisticians get to the best part - interpreting results - faster and with greater confidence in the accuracy of their programs.

Weekly Colloquium

Research on the Discrete Option Multiple Choice Item Type

Neal Kingston and Gail Tiemann
Friday, September 24, 2010 -
3:00pm to 4:00pm
Watson Library, Room 455

The Discrete Option Multiple Choice (DOMC) item type is a new computer-delivered alternative to the traditional multiple choice item and its varieties. In the DOMC design, the computer presents an item’s stem and then presents the answer choices one at a time in a random sequence. As the computer presents each choice, the examinee must decide whether it is the correct answer to the stem. Once the examinee correctly identifies the correct answer, the item ends and the computer offers no further choices. The DOMC item type has the potential to reduce the impact of test wiseness, diminish the impact of coaching, and reduce the feasibility of effective cheating based on the memorization and sharing of items. Research with over 800 examinees addressed these and other questions: Are the item difficulty and discrimination parameters the same for DOMC and traditional multiple choice items? Does confirmatory factor analysis support an item type factor? Does differential item functioning analysis reveal gender differences associated with item type? Is there an actual or perceived improvement in test security?
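
The administration logic can be sketched in a few lines. How a "yes" to a distractor is scored is not stated in the abstract, so the scoring rule in the comment below is an assumption made only for illustration.

```python
import random

def administer_domc(stem, options, keyed, respond):
    """Present the answer choices one at a time in random order; respond(stem, option)
    should return True if the examinee endorses the option as the correct answer."""
    for option in random.sample(options, len(options)):
        if respond(stem, option):
            # Endorsing the keyed option scores correct and ends the item;
            # endorsing a distractor is assumed here to end the item as incorrect.
            return option == keyed
    return False  # the keyed option was never endorsed
```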

Weekly Colloquium

Alternative Multiple Imputation Inference for Mean and Covariance Structure Modeling

Li Cai, University of California-Los Angeles
Friday, September 17, 2010 -
3:00pm to 4:00pm
Watson Library, Room 455

Model-based multiple imputation has become an indispensable method in the social and behavioral sciences. Mean and covariance structure models are often fitted to multiply imputed data sets. However, the presence of multiple random imputations complicates model fit testing, which is an important aspect of mean and covariance structure modeling. Extending the logic developed by Yuan and Bentler (2000), Cai (2008), and Cai and Lee (2009), we propose an alternative method for conducting multiple-imputation-based inference for mean and covariance structure modeling. In addition to computational simplicity, our method naturally leads to an asymptotically chi-squared model fit test statistic. Using simulations we show that our new method is well calibrated, and we illustrate it with analyses of two real data sets. A SAS macro implementing this method is also provided.

Weekly Colloquium

General Introduction to Missing Data

Kyle Lang
Friday, September 10, 2010 -
4:00pm to 5:00pm
Watson Library, Room 455

Incomplete data are one of the most prevalent roadblocks to unbiased and efficient parameter estimation that applied researchers face in their daily work. Over the years, many techniques have been developed to deal with the problem, with varying degrees of success. This talk will give a short introduction to the missing data problem, discuss the assumptions of the missing data model, and introduce the three most prevalent approaches to handling incomplete data in substantive data analysis: the expectation maximization (EM) algorithm, multiple imputation, and full information maximum likelihood.
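
As a small concrete piece of the multiple imputation workflow mentioned above, the sketch below pools an estimate across m imputed data sets with Rubin's rules; the estimates and variances are made-up placeholders, not results from the talk.

```python
import numpy as np

def pool_rubin(estimates, variances):
    estimates, variances = np.asarray(estimates), np.asarray(variances)
    m = len(estimates)
    qbar = estimates.mean()            # pooled point estimate
    w = variances.mean()               # within-imputation variance
    b = estimates.var(ddof=1)          # between-imputation variance
    t = w + (1 + 1 / m) * b            # total variance
    return qbar, np.sqrt(t)

est, se = pool_rubin([0.42, 0.39, 0.45, 0.40, 0.44],
                     [0.010, 0.011, 0.009, 0.010, 0.012])
print(f"pooled estimate = {est:.3f}, pooled SE = {se:.3f}")
```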

Attachments: 
Weekly Colloquium

A Survey of Agent-Based Computer Simulation Modeling

Paul Johnson
Friday, September 10, 2010 -
3:00pm to 4:00pm
Watson Library, Room 455

A complex system is a model in which there are many "loosely interconnected" agents, individual actors who are autonomous and yet affect each other. The theory of complex systems is a set of more-or-less standard accounts of surprising outcomes that can be observed in well-traveled example models. These models were originally developed for physical and ecological systems, but they have obvious application to social science modeling projects in which we conceive of people as more-or-less interdependent agents. The computer models that are used to explore these ideas are called agent-based simulation models. This talk will be a survey of important ideas that have motivated the growth of agent-based computer simulation modeling during the past 15 years or so. It will provide illustrations via some examples that are written in Objective-C with the Swarm Simulation System. Some of the examples that are considered are famous ones, like Thomas Schelling's model of neighborhood racial segregation or the psychological model of opinion formation known as Latane's Social Impact Model. Other examples will consider simulations of elections and markets.
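
As one concrete example of the kind of model discussed, here is a compact Python sketch of a Schelling-style segregation model (parameter values are arbitrary): agents of two types relocate to random empty cells whenever too few of their neighbors share their type, and strong segregation emerges from this mild individual preference.

```python
import numpy as np

rng = np.random.default_rng(0)
size, threshold = 30, 0.4
grid = rng.choice([0, 1, 2], size=(size, size), p=[0.1, 0.45, 0.45])  # 0 = empty cell

def unhappy(g, i, j):
    if g[i, j] == 0:
        return False
    nb = g[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].ravel()
    same = (nb == g[i, j]).sum() - 1        # neighbors of the same type
    occupied = (nb != 0).sum() - 1          # occupied neighboring cells
    return occupied > 0 and same / occupied < threshold

for _ in range(30):                         # a few rounds of relocation
    movers = [(i, j) for i in range(size) for j in range(size) if unhappy(grid, i, j)]
    empties = list(zip(*np.where(grid == 0)))
    rng.shuffle(movers)
    for (i, j) in movers:
        if not empties:
            break
        k = rng.integers(len(empties))
        ei, ej = empties.pop(k)
        grid[ei, ej], grid[i, j] = grid[i, j], 0
        empties.append((i, j))
```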

Attachments: 
Weekly Colloquium

Fun with Learning Maps

Sylvia Tidwell Scheuring
Friday, September 3, 2010 -
3:00pm to 4:00pm
Watson Library, Room 455

Online assessment, in its infancy, is likely to facilitate a variety of innovations in both formative and summative assessment. This talk will focus on the potential of online assessment to accelerate learning via effective links to instruction. A case is made that detailed learning maps of academic progress are especially conducive to effective skill and concept diagnosis and prescriptive learning, contributing construct validity and precision to assessment results and coherence to instructional interventions. Item adaptive testing using learning maps and the paradigm of intelligent agents is discussed in the context of a vision of a seamless integration of assessment and instruction for students at all ability levels.

Weekly Colloquium

Interpretable Reparameterizations of Growth Curve Models

Kristopher Preacher
Friday, August 27, 2010 -
3:00pm to 4:00pm
Watson Library, Room 455

One of the primary goals of longitudinal modeling is to estimate and interpret free model parameters that parsimoniously reflect meaningful aspects of change over time. In many cases, a linear or nonlinear model may be sensible from a theoretical perspective and may fit the data well, yet may have parameters that are difficult to interpret in a meaningful way. Such models often may be reparameterized to yield statistically equivalent models with more easily interpretable parameters. We address various ways in which reparameterization may be used in the context of latent growth curve modeling (LGM), a powerful and flexible framework often used to model trends in longitudinal data, as well as individual differences in those trends.

Weekly Colloquium

Estimating Three-way Interactions in Structural Equation Modeling

Alex Schoemann
Friday, April 30, 2010 -
3:00pm to 4:00pm
Watson Library, Room 455

There are several techniques to estimate interactions between two latent variables in Structural Equation Modeling (SEM). This work extends three of these techniques, orthogonalizing (Little, Bovaird & Widaman, 2006), unconstrained mean centering (Marsh, Wen, & Hau, 2004) and LMS/QML (Klein & Moosbrugger, 2000) to interactions between three latent variables.

Weekly Colloquium

Using Person-Fit Statistics to Detect Cheating on Examinations

Mike Clark
Friday, April 23, 2010 -
3:00pm to 4:00pm
Watson Library, Room 455

Person-fit statistics are used to assess how congruent an individual's response string is with an overall measurement model. Detecting aberrant response patterns in data has a host of useful applications, but the focus of this talk will be in the context of cheating detection. Many person-fit statistics have been developed from a variety of measurement perspectives, including classical test theory, factor analysis, and item response theory. A few of the more popular current methods will be discussed, followed by future directions in this area of research.
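
One widely used person-fit statistic from the IRT tradition, the standardized log-likelihood index l_z (often attributed to Drasgow, Levine, and Williams), can be computed directly when item parameters are treated as known. The item parameters and response string below are made-up examples, not data from the talk.

```python
import numpy as np

def lz(responses, a, b, theta):
    p = 1 / (1 + np.exp(-a * (theta - b)))   # 2PL response probabilities
    l0 = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    e = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))              # E[l0]
    v = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)               # Var[l0]
    return (l0 - e) / np.sqrt(v)   # large negative values flag aberrant responding

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9, 1.3])     # discriminations (invented)
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 1.5])   # difficulties (invented)
u = np.array([0, 0, 1, 1, 1, 1])                 # unexpected pattern for a low-ability examinee
print(lz(u, a, b, theta=-0.5))
```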

Weekly Colloquium

Mediation in Multilevel Structural Equation Modeling: Current Work and Future Directions

Emily Fall
Friday, April 16, 2010 -
3:00pm to 4:00pm
Watson Library, Room 455

Multilevel Structural Equation Modeling (MSEM) is the intersection of two dominant analytical techniques, multilevel modeling (MLM) and structural equation modeling (SEM), and is a rapidly growing area of research. Many of the models currently analyzed with MLM or SEM can be analyzed more appropriately within the MSEM framework, although many specific model fit and estimation questions still require further investigation. Of particular interest for this talk is mediation analysis in MSEM. Current work as well as new questions will be addressed.

Weekly Colloquium

Conceptualizing and Modeling Contextual Effects in Longitudinal Studies

Todd Little
Friday, April 9, 2010 -
3:00pm to 4:00pm
Watson Library, Room 455

I present a broad conceptual framework for thinking about contextual effects in longitudinal research. I discuss the different statistical models that can be used to represent such effects and I review a number of important design and measurement issues that relate to modeling contextual effects. This talk is meant for a broad social science audience.

Weekly Colloquium

High Stakes Testing - Where Psychometrics, Policy, and Politics Meet

Neal Kingston
Friday, April 2, 2010 -
3:00pm to 4:00pm
Watson Library, Room 455

High stakes testing occurs in a complex environment swirling with challenges. These challenges place ever increasing demands on the processes and psychometric underpinnings of such testing programs. Economic constraints lead test sponsors to demand conflicting goals for single testing programs. In this talk Neal Kingston will discuss the policy and political landscape, how to unravel the conflicting goals, and appropriate procedures for developing and using high stakes accountability tests. Some current research questions will be raised in the discussion.

Weekly Colloquium

Modeling Democratization: Methodological Issues in Quantitative Small-N Macro-Comparative Research

Robert Hughes
Friday, March 12, 2010 -
3:00pm to 4:00pm
Watson Library, Room 455

The Bayesian analytical framework presents several possibilities for small-N comparative historical research. Yet, while the advantages of smaller sample size requirements and the modeling flexibility offered by WinBUGS allow for the possibility of small-N quantitative analysis, more investigation is required concerning model fit statistics for Bayesian models. Further guidelines concerning the use of informed priors would also be helpful. In relation to these two issues, I discuss the possibility, and the practical implications, of using Gibbs sampling and MCMC methods with uninformative priors in small-N research situations.
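
As a toy illustration of the Gibbs sampling machinery mentioned above (not the models from the talk), the sketch below samples the mean and variance of a small simulated normal sample under diffuse conjugate priors.

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(2.0, 1.0, size=8)     # a small-N sample (simulated)
n, ybar = len(y), y.mean()

mu, sigma2 = 0.0, 1.0
draws = []
for _ in range(5000):
    # mu | sigma2, y ~ Normal(ybar, sigma2 / n) under a flat prior on mu
    mu = rng.normal(ybar, np.sqrt(sigma2 / n))
    # sigma2 | mu, y ~ Inverse-Gamma(n/2, sum((y - mu)^2)/2) under a Jeffreys-type prior
    sigma2 = 1.0 / rng.gamma(n / 2.0, 2.0 / np.sum((y - mu) ** 2))
    draws.append((mu, sigma2))

burned = np.array(draws[1000:])      # discard burn-in
print("posterior mean of mu:", burned[:, 0].mean())
print("posterior mean of sigma^2:", burned[:, 1].mean())
```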

Weekly Colloquium

Time Moderated Effects Using Latent Growth Curve Models

James Selig, University of New Mexico
Friday, March 5, 2010 -
3:00pm to 4:00pm
Watson Library, Room 455

Latent growth curve models (LGCM) can be used to examine how individuals change over time. These models yield parameter estimates (e.g., intercept and slope) that describe the average pattern of change for a group of individuals. Often, however, it is substantively interesting to determine whether individual differences in these values: 1) predict later person-level outcomes, or 2) are predicted by person-level characteristics. What is not often recognized is that many such effects are moderated by time because the magnitude of the effect will depend upon the time scale for the LGCM. The purpose of this presentation is to explore the issue of time moderated effects in LGCMs. An empirical example using maternal depressive symptoms and children's problem behavior will be used to illustrate such time moderated effects.

Weekly Colloquium

ROC Benchmark Decision-making

Waylon Howard and Rawni Anderson
Friday, February 26, 2010 -
3:00pm to 4:00pm
Watson Library, Room 455

A number of methods exist for summarizing the predictive and diagnostic accuracy of screening assessments. Receiver Operating Characteristic (ROC) curve analysis summarizes the global diagnostic accuracy of a screening instrument with respect to a standardized reference assessment of known validity. The goal of this research is to evaluate the predictive and diagnostic utility of the Early Communication Indicator (ECI) for identifying infants and toddlers demonstrating language delay as measured by the Preschool Language Scale, Fourth Edition (PLS-4). Methods and results will be discussed.
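
A minimal ROC computation with scikit-learn on simulated screener scores looks like the following (illustrative only; no ECI or PLS-4 data are used): the AUC summarizes global diagnostic accuracy, and the curve traces the sensitivity/false-positive trade-off across cutoffs.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(4)
delayed = rng.binomial(1, 0.2, size=500)                          # reference-standard status
score = rng.normal(loc=np.where(delayed, -0.8, 0.0), scale=1.0)   # screener score; lower = more risk

fpr, tpr, thresholds = roc_curve(delayed, -score)   # negate so higher values indicate risk
print("AUC:", roc_auc_score(delayed, -score))
print("cutoff maximizing Youden's J:", -thresholds[np.argmax(tpr - fpr)])
```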

Weekly Colloquium

Model Fit in SEM and CFA: Re-Examining Traditional Rules of Thumb

John Geldhof
Friday, February 19, 2010 -
3:00pm to 4:00pm
Watson Library, Room 455

Indices of model fit inform researchers’ decisions regarding the adequacy of their confirmatory factor models, an important step in conducting SEM. While intuition and personal experience have led some researchers to suggest rules of thumb for common fit indices, a series of reports and papers in the late nineties (Hu & Bentler, 1997, 1998, 1999) revolutionized the way goodness of fit indices were approached. Hu and Bentler’s results indicated that previously held rules of thumb (e.g., CFI > .90) are overly liberal and new cut-offs were suggested. Hu and Bentler’s suggestions were based on an overly restrictive definition of acceptable model fit, however. This talk presents the results from a broader simulation that discusses when traditional vs. Hu and Bentler’s alternative cut-off criteria are most appropriate.

Weekly Colloquium

Mplus Programming

Amber Watts
Friday, February 5, 2010 -
3:00pm to 4:00pm
Watson Library, Room 455

Dr. Watts will discuss the basics of using Mplus software, including importing data, key elements of Mplus code, troubleshooting models, and the advantages and disadvantages of using the program. She will give examples of programming several types of advanced models, including confirmatory factor analysis with mixed categorical and zero-inflated Poisson distributions and/or multiple groups, multilevel models, growth curve models, and bivariate dual change score models.

Weekly Colloquium
