Recruitment

Recruitment Status
Not yet recruiting
Estimated Enrollment
900

Summary

Conditions
Substance Abuse
Type
Interventional
Phase
Not Applicable
Design
Allocation: Randomized
Intervention Model: Parallel Assignment
Intervention Model Description: The study design involves a three-condition parallel randomized controlled trial contrasting: (1) SFP Online, (2) the SFP Home-use DVD/video series, and (3) Wait-Listed Controls. A second design feature is the use of a non-inferiority trial (NIT) to empirically examine the efficacy of SFP Online when compared to the SFP Group Norms data. The Group Norms is a representative, demographically matched sample of n=1,700 families drawn from a database of over 6,000 families that have attended the full 14-session traditional group format of SFP.
Masking: None (Open Label)
Primary Purpose: Prevention

Participation Requirements

Age
Between 11 years and 125 years
Gender
Both males and females

Description

This study tests a new web-based online parenting skills training and youth drug prevention program based on the evidence-based Strengthening Families Program (SFP). The study design involves a three-condition parallel randomized controlled trial contrasting: (1) SFP Online, (2) the SFP Home-use DVD/video series, and (3) Wait-Listed Controls. The intervention condition, SFP Online, is an interactive, multimedia condition testing a 10-session online program with interrelated tracks for parents and youth. Both tracks involve completion of three mini-lessons per week delivered online for 10 weeks. Each track includes a gaming portion to increase engagement and reinforce lesson content through stealth learning. The SFP Home-use DVD/video series is an 11-session program with the same content as the online version, but it is delivered via an online video presentation or a DVD player. The Wait-Listed control condition receives emails with recipes over the same 10-week period; their youth receive riddles. At the conclusion of a 3-month follow-up period, the wait-listed controls receive the SFP Online intervention. Our statistical analysis addresses the differences between the SFP Online and Home-use DVD/video conditions. A second design feature is a Non-Inferiority Trial (NIT) to empirically examine the efficacy of SFP Online when compared to the SFP Group Norms data. The Group Norms, which serves as a benchmark of SFP effectiveness, is a representative, demographically matched sample of n=1,700 families drawn from a database of over 6,000 families who have attended the full SFP 14-session classes. Effect sizes, computed with the partial eta-squared statistic, will be compared between conditions for the major outcome measures. The effect sizes will be adjusted for demographic differences at the individual level as well as implementation factors. The margin of equivalence for the effect size comparison between conditions is set at 10% (the effect sizes for the SFP Online program outcomes will be at least 10% larger than the corresponding effect sizes for the SFP videos or Group Norms).

RECRUITMENT: The investigators will recruit 240 families with at least one parent and one youth (n=480 participants), with 80 families randomly assigned to each experimental condition. Recruitment is handled by 24 SFP recruiters who provided Letters of Support. These recruiters are ethnically diverse, are geographically dispersed across the U.S., and have access to families of multiple races/ethnicities as well as marginalized families. Each recruiter is targeting recruitment of 10 families (24 recruiters x 10 families = 240 families).

ASSIGNMENT: Individual families will be randomly assigned to one of the three interventions with a computerized random number generator to eliminate selection bias and control for unobserved confounding variables that contribute to program outcomes. We will examine the distribution of relevant demographic variables (race/ethnicity, income, education, adult and child age) to assure balance between conditions at baseline (a brief sketch of such a check appears after the statistical analysis paragraph below). We will also conduct analyses to detect significant differences in intervention effects across these variables.

STATISTICAL ANALYSIS: Prior to evaluating program effects, we examine pretest equivalence between experimental conditions using appropriate tests for categorical or continuous baseline measures. Significant differences are controlled in covariate-adjusted models evaluating program efficacy. Attrition is first examined using a 3 (condition) x 2 (baseline risk) ANOVA with continued presence in the study as the dependent variable.
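The random assignment and baseline balance check described above could be carried out roughly as follows. This is a minimal sketch using simulated placeholder data; the variable names (income, child_age, race_ethnicity) and values are illustrative assumptions, not the study's actual instruments or code.

```python
# Minimal sketch of family-level random assignment and baseline balance checks.
# All data and variable names below are simulated placeholders.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(seed=2020)  # computerized random number generator

# Hypothetical baseline frame: 240 families, one row per family.
n_families = 240
baseline = pd.DataFrame({
    "family_id": np.arange(n_families),
    "income": rng.normal(55000, 15000, n_families),
    "child_age": rng.integers(11, 16, n_families),
    "race_ethnicity": rng.choice(["A", "B", "C", "D"], n_families),
})

# Randomly assign 80 families to each of the three conditions.
conditions = np.repeat(["SFP Online", "SFP DVD/video", "Wait-list"], n_families // 3)
baseline["condition"] = rng.permutation(conditions)

# Balance check: one-way ANOVA for a continuous covariate ...
groups = [g["income"].to_numpy() for _, g in baseline.groupby("condition")]
f_stat, p_cont = stats.f_oneway(*groups)

# ... and a chi-square test of independence for a categorical covariate.
crosstab = pd.crosstab(baseline["condition"], baseline["race_ethnicity"])
chi2, p_cat, dof, _ = stats.chi2_contingency(crosstab)

print(f"Income balance: F = {f_stat:.2f}, p = {p_cont:.3f}")
print(f"Race/ethnicity balance: chi2 = {chi2:.2f}, p = {p_cat:.3f}")
```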
We also examine whether there is any differential attrition by condition (i.e., coding attrition as 1/0 and using logistic regression models to predict retention; a sketch of this check appears after the intention-to-treat discussion below). We also use confirmatory techniques to contrast variance/covariance matrices between conditions to assess whether patterns of statistical relations differ by condition. Differential attrition by condition could potentially limit internal validity (intervention effects might be confounded with attrition effects) or external validity (generalizability to other families). Data analysis begins by generating simple descriptive statistics on all baseline measures (e.g., frequencies, means, variances); this procedure is repeated for the posttest and the 3-month follow-up data. Distributions are examined for skewness, kurtosis, and response patterns. Chi-square tests and point-biserial and Pearson correlations are used to examine statistical relations among baseline predictors (i.e., checking for multicollinearity), demographics, site characteristics, and outcome variables; these associations are evaluated for statistical significance using chi-square, t, and F tests. Violations of the assumptions of normality and homoscedasticity (often encountered in family research and when assessing youth self-reported drug use) will require data transformations (e.g., logarithmic, arcsine, spline, or percentile weighting). If the transformations fail to normalize the data, we can use nonparametric tests (e.g., Kendall's tau, the Mann-Whitney U test, the Wilcoxon rank-sum test, Spearman's correlation, and Friedman's ANOVA). For non-Gaussian distributions that arise from low-frequency behaviors, we can employ Poisson regression, zero-inflated, and two-part semicontinuous models to adjust for highly skewed distributions with a preponderance of zeroes.

MISSING DATA: There may be some participants who fail to take the outcome evaluation surveys, stop participating in the intervention, or are reluctant to provide data. These occurrences represent different faces of attrition (withdrawal and poor adherence) and must be examined carefully. We assume that data are missing completely at random (MCAR), or that the mechanisms causing the missing data are "ignorable" with no bias in the condition comparison; however, we apply Little's test of MCAR for multivariate data to determine whether missing data bias the results. Regardless of how it arises, the complete-data methods used require multiple imputation procedures to obtain efficient parameter estimates. We use Full Information Maximum Likelihood estimation to recover the missing data. A reasonable number of imputations is 10, given the efficiency of imputation with relatively low levels of missingness. The 10 imputed datasets are then combined using multiple-inference procedures that adjust standard errors for missing-data uncertainty, producing more efficient parameter estimates.

INTENTION TO TREAT: Noncompliance arises because participants may fail to finish all the lessons, take none at all, or only take the assessments without being exposed to the lessons. This makes those who complete the lessons no longer a "random sample of the original group." Intention-to-treat (ITT) is a strategy for analyzing data in which participants remain in the experimental condition to which they were assigned, regardless of their lesson completion or non-compliance status.
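The differential-attrition check described above (retention coded 1/0 and regressed on condition indicators) might look roughly like the sketch below. The column names, retention rate, and data are simulated assumptions for illustration only.

```python
# Sketch of the differential-attrition check: retention coded 1/0 and regressed
# on condition indicators. Data and column names are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(7)
n = 240
df = pd.DataFrame({
    "condition": rng.choice(["online", "dvd", "waitlist"], n),
    "retained": rng.binomial(1, 0.85, n),  # 1 = provided follow-up data, 0 = attrited
})

# Logistic regression of retention on condition; a significant condition effect
# would signal differential attrition across arms.
full_model = smf.logit("retained ~ C(condition)", data=df).fit(disp=False)
null_model = smf.logit("retained ~ 1", data=df).fit(disp=False)
print(full_model.summary())

# Likelihood-ratio test of the overall condition effect (2 df for 3 conditions).
lr_stat = 2 * (full_model.llf - null_model.llf)
p_value = stats.chi2.sf(lr_stat, df=2)
print(f"LR test of condition: chi2(2) = {lr_stat:.2f}, p = {p_value:.3f}")
```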
Using sensitivity analyses, the data from completers are then contrasted with a dataset from which cases were excluded post-randomization because they were not exposed to the entire curriculum (efficacy subset analyses). This approach assumes that data from censored cases are missing at random (MAR) and that the outcomes were obtained in an unbiased manner. Sensitivity analyses will compare outcomes based on different levels of lesson completion (relaxing the exclusion rule), with this procedure repeated leaving intervention exposure (number of sessions) continuous, as participants may benefit from any exposure. We will also use the complier average causal effects (CACE) estimation method to address differences in compliance with the intervention and its effects on program outcomes (a simplified CACE-style sketch appears later in this section). Using CACE analysis, multiple groups (mixtures) are formed corresponding to participants' level of compliance (i.e., engagement) and treatment assignment (SFP Online or SFP DVD/videos). Program outcomes are then contrasted between these mixtures using traditional logistic regression methods. We can further categorize compliers based on their observed lesson completion rates, yielding low- vs. high-compliers in each condition. Furthermore, engagement can be conditioned on participant-level covariates (i.e., marital status, income, stress, drug use, or family size). ITT and CACE techniques keep the randomized design intact, mitigate bias from differential program completion, and enable us to make causal inferences about intervention effects in this sample. In the event that non-completing participants are not randomly dispersed among conditions, we will also report effect sizes among those who completed the lessons and took the post surveys.

TESTS OF CONFOUNDING: Even with random assignment to experimental conditions and reduction of any pretest imbalances between them, confounding effects could remain a threat to validity (i.e., program effects could be biased). As a result, we will examine selected variables measured at baseline for evidence of any imbalance (i.e., pretest equivalence) and perform secondary analyses in which we model pretreatment covariates to account for between-group outcome differences. This examination will be conducted using generalized linear models. Pretreatment differences can be participant specific (e.g., demographics). There is evidence supporting the inclusion of covariates in order to causally attribute differences between the conditions to the intervention; their inclusion increases model precision and boosts power, and this efficiency holds for binary outcomes as well, should we choose to model youth drug use in this manner.

MANIPULATION CHECKS: We anticipate that certain relationships will be mediated by other variables. Our longitudinal study design allows us to make rigorous inferences regarding causal processes, addressing treatment construct validity and the putative mechanisms through which SFP works. We assume sequential ignorability even though the mediator is assessed post-randomization. We can statistically assess the decrement in the magnitude of the direct effect, adjusted for the indirect effects. This approach extends to longitudinal data with more than one putative mediator via the tracing rule for path analysis. The longitudinal design also provides temporal separation of the pretest, intervention (weeks 1-10), posttest (weeks 10-11), and the 3-month follow-up (weeks 22-23).
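The protocol describes CACE via a mixture-model approach. As a simpler, hedged illustration of the same idea, the sketch below uses the Bloom (instrumental-variable) estimator, which divides the ITT effect by the treatment-arm compliance rate and identifies the complier average causal effect under one-sided noncompliance. This is a swapped-in simplification, not the mixture analysis named above, and all values are simulated placeholders.

```python
# Simplified CACE sketch using the Bloom (instrumental-variable) estimator:
# under one-sided noncompliance, CACE = ITT effect / proportion of compliers
# in the treatment arm. All data below are simulated placeholders.
import numpy as np

rng = np.random.default_rng(42)
n_per_arm = 80

assigned_treatment = rng.normal(0.5, 1.0, n_per_arm)   # outcomes, assigned to SFP Online
assigned_control = rng.normal(0.0, 1.0, n_per_arm)     # outcomes, wait-listed controls
complied = rng.binomial(1, 0.7, n_per_arm)             # 1 = completed the lessons

itt_effect = assigned_treatment.mean() - assigned_control.mean()
compliance_rate = complied.mean()
cace = itt_effect / compliance_rate

print(f"ITT effect:      {itt_effect:.3f}")
print(f"Compliance rate: {compliance_rate:.2f}")
print(f"CACE estimate:   {cace:.3f}")
```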
We use the multivariate delta method to compute standard errors for mediated effects, which simulations have shown produces accurate estimates when benchmarked against bootstrap confidence intervals for the mediated effect (a sketch of this computation appears at the end of this section). Potential mediators in these longitudinal models assessing program efficacy include measures of parenting efficacy, skills performance (based on interim quizzes), parent-child communication, monitoring, boundary setting, and levels of engagement (based on satisfaction, lesson completion, and usage metrics).

EFFECT MODIFICATION: A limited set of post-hoc analyses will explore the homogeneity of the intervention effect across participant subgroups. These analyses help determine whether program effects are optimized for certain subgroups.

POWER: In estimating power, we assumed: (1) three experimental conditions, (2) multiple waves of data collection, and (3) random effects associated with estimated outcomes ranging between .35 and .45, with differing types of outcome measures (dichotomous vs. continuous). We project the ability to recruit 80 families (1 parent + 1 youth) in each experimental condition over the recruitment period. With this overall sample (n=480 participants), we can detect significant effects across a broad range of effect sizes (see the sketch below).
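A rough illustration of this power projection is sketched below, assuming a simple one-way fixed-effects ANOVA on the 240 randomized families and benchmark partial eta-squared values. These assumptions are for illustration only and do not reproduce the protocol's multi-wave, random-effects power model.

```python
# Sketch of a power calculation: convert an assumed partial eta-squared to
# Cohen's f and compute power for a 3-group one-way ANOVA. The eta-squared
# values are benchmark assumptions, not study results.
import numpy as np
from statsmodels.stats.power import FTestAnovaPower

n_total = 240          # 80 families per condition (family = unit of randomization)
k_groups = 3
alpha = 0.05

for eta_sq in (0.02, 0.06, 0.13):        # small / medium / large benchmarks
    cohens_f = np.sqrt(eta_sq / (1 - eta_sq))
    power = FTestAnovaPower().power(effect_size=cohens_f, nobs=n_total,
                                    alpha=alpha, k_groups=k_groups)
    print(f"eta^2 = {eta_sq:.2f} -> f = {cohens_f:.3f}, power = {power:.2f}")
```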
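For the delta-method standard errors of mediated effects described earlier in this section, a minimal single-mediator sketch follows. The data are simulated, "parenting_efficacy" merely stands in for one of the protocol's mediators, and two ordinary least-squares regressions are used as a simplified stand-in for the longitudinal models; the delta-method (Sobel-type) standard error is compared against a bootstrap confidence interval.

```python
# Single-mediator sketch: indirect effect a*b with a first-order delta-method
# (Sobel-type) standard error, compared against a bootstrap confidence interval.
# All variables are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 240
condition = rng.binomial(1, 0.5, n)                          # 1 = SFP Online (placeholder)
parenting_efficacy = 0.4 * condition + rng.normal(size=n)    # hypothetical mediator
outcome = 0.5 * parenting_efficacy + 0.1 * condition + rng.normal(size=n)

def indirect_effect(x, m, y):
    """Return (a*b, a, b, se_a, se_b) from two OLS regressions."""
    m_fit = sm.OLS(m, sm.add_constant(x)).fit()                          # a path: X -> M
    y_fit = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit()    # b path: M -> Y | X
    a, se_a = m_fit.params[1], m_fit.bse[1]
    b, se_b = y_fit.params[1], y_fit.bse[1]
    return a * b, a, b, se_a, se_b

ab, a, b, se_a, se_b = indirect_effect(condition, parenting_efficacy, outcome)
se_delta = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)          # first-order delta method

# Nonparametric bootstrap of the indirect effect for comparison.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(condition[idx], parenting_efficacy[idx], outcome[idx])[0])
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"Indirect effect a*b = {ab:.3f}, delta-method SE = {se_delta:.3f}")
print(f"Bootstrap 95% CI: [{lo:.3f}, {hi:.3f}]")
```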

Tracking Information

NCT #
NCT04214327
Collaborators
Not Provided
Investigators
Principal Investigator: Karol L Kumpfer, PhD, Strengthening Families Program LLC