
Office of Management and Budget (OMB) Clearance Request

for the

Post-Separation Transition Assistance Program (PSTAP) Assessment and Section 4306 Study

Supporting Statement B: Collections of Information Employing Statistical Methods



  1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection methods to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate the expected response rates for the collection as a whole. If the collection has been conducted previously, include the actual response rate achieved during the last collection.



The Post-Separation Transition Assistance Program (PSTAP) and 4306 Study will be administered using the annual Cross-Sectional and Longitudinal Surveys of Veterans. The PSTAP surveys are currently approved by OMB under Clearance 2900-0864. The PSTAP Cross-Sectional Survey is a census of three cohorts of Veterans who separated from service approximately six (6) months, one (1) year, and three (3) years prior to the survey date. Veterans who respond “yes” on the Cross-Sectional Survey to being contacted for future surveys become the participants for the Longitudinal Survey.


We will select the 4306 Study sample using random sampling from four cohorts based on the date of TAP pre-separation counseling completion: 1a) attended TAP pre-separation counseling from October 2019 through September 2020; 1b) attended in the year prior to the changes to TAP implemented based on recommendations from the 4305 Study; 2) attended in the year after the changes to TAP based on the 4305 recommendations; and 3) attended from October 2018 through September 2019. The survey will ask 4306 Study participants for permission to contact them again for future annual surveys. Changes made to TAP from the 4305 Study are expected to be implemented in 2024, meaning the first survey of Veterans in those cohorts will take place in June of 2026.


There is some degree of overlap between the PSTAP and 4306 Study samples. Combining the studies and using a single survey instrument will reduce costs and burden to respondents. Veterans who separated during a period covered by both an upcoming PSTAP cohort and a 4306 cohort will automatically be selected for both studies. For sampling and weighting purposes for the 4306 Study, we will consider these Veterans to be sampled with certainty. In the 2024 survey, roughly 13,000 Veterans were in both studies. Because the PSTAP and 4306 Study share the same instrument, Veterans in cohorts in both studies will receive only one survey, and their responses will be used for both studies.


As this is an ongoing study, several Longitudinal Survey cohorts are already being surveyed. Each year, an additional cohort is added to the Longitudinal Survey from the previous year’s Cross-Sectional Survey 6-month cohort.


Figure A below provides a visual representation of the cohorts. While the Cross-Sectional study surveys three separate cohorts each year, only the 6-month cohort is invited into the longitudinal study in the following year. The 4306 Study cohorts will be administered based on the results and timing of changes made from the 4305 Study.

Figure A. Cross-Sectional and Longitudinal Cohorts by Survey Year


Annually, there are roughly 200,000 total separations from the Armed Forces. Each cohort in the study covers a two-month separation window, meaning roughly 33,130 separated servicemembers are in each cohort.


Given our experience collecting data from 2019 through 2024, we estimate that just over 10 percent of each cohort, or about 3,500 Veterans, will respond to the PSTAP Cross-Sectional Survey and that approximately half of respondents (7.5% of the cohort, or 1,750 Veterans) will agree to be contacted for future surveys.


The estimated response rate for the Longitudinal Survey is higher than for the Cross-Sectional Survey because the sample has already responded to the Cross-Sectional Survey and agreed to receive invitations to future surveys. If 1,750 Veterans agree to participate in the follow-up survey, then we expect that 875 (an assumed response rate of 50% among Veterans who agreed to further contact) will respond to the Longitudinal Survey, or approximately 25% of the starting cohort of 3,500 Cross-Sectional respondents.
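
As an illustrative check of this cascade (a minimal sketch in Python; the rates are the assumptions stated above, not program data):

    # Illustrative check of the expected response cascade described above.
    cs_respondents = 3_500                  # ~10% of a ~33,000-person cohort
    consenters = cs_respondents * 0.50      # half agree to future contact -> 1,750
    longitudinal = consenters * 0.50        # assumed 50% follow-up response -> 875

    print(longitudinal)                     # 875.0
    print(longitudinal / cs_respondents)    # 0.25, i.e., 25% of the starting cohort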


Based on this information, a power analysis was conducted to assess the statistical power of responses using Minimum Detectable Differences (MDDs). The MDD is the smallest difference in proportions for an outcome measure (e.g., employment) between the treatment group and the control group that can be detected as statistically significant. The tables assume a 95% confidence level (alpha = .05) and a one-tailed significance test. For this study, an MDD of 10 percent or less will be acceptable for drawing conclusions from the survey responses.


We performed power calculations for between-cohort comparisons because these are the statistical analyses we plan to conduct for future PSTAP reports. We estimate MDDs for the Longitudinal Survey cohorts because these cohorts represent the smallest expected number of respondents after attrition. Table A provides MDDs assuming a comparison of proportions measured within two cohorts, where the estimated proportion in one of the comparison cohorts is 50% (the worst-case assumption, which yields an upper bound for the estimated MDD). We estimated MDDs assuming best-case (27%), expected (25%), and worst-case (20%) response rates among the 3,500 cross-sectional respondents in each cohort. The need to adjust weights to account for differential nonresponse across subgroups of interest introduces a design effect; we assume a design effect of 1.5 when estimating MDDs.


As Table A shows, the assumed response rates will result in MDDs of 9.1 percent or less under all scenarios. Therefore, we expect that the threshold of 10 percent or less MDD for between-cohort analyses will be met even after accounting for attrition.


Table A. Minimum Detectable Differences, 95% confidence level (alpha = .05) for between-cohort comparisons*

Number of respondents, assuming 27%, 25%, and 20% response rates from a starting sample of 3,500 per cohort:

| N = 950 per cohort | N = 875 per cohort | N = 700 per cohort |
| 7.8%               | 8.2%               | 9.1%               |

*MDDs are for one-tailed comparisons. Assumes a comparison of estimates where the estimated value in the comparison cohort is 50%. MDD estimates assume a design effect of 1.5.
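
For reference, the MDD calculation behind Table A can be sketched as follows. This is a minimal illustration, not the Contractor's code; in particular, the text does not state the assumed statistical power, so 80% is used here as a conventional default. Table A's slightly larger figures appear consistent with a somewhat higher power assumption.

    # Sketch of a minimum detectable difference (MDD) for a one-tailed comparison
    # of proportions between two equal-size cohorts, inflated by a design effect.
    # ASSUMPTION: 80% power (not stated in the text above).
    from math import sqrt
    from statistics import NormalDist

    def mdd(n_per_cohort, p=0.50, alpha=0.05, power=0.80, deff=1.5):
        z = NormalDist().inv_cdf
        se = sqrt(deff * 2 * p * (1 - p) / n_per_cohort)  # SE of the difference
        return (z(1 - alpha) + z(power)) * se             # one-tailed critical values

    for n in (950, 875, 700):  # 27%, 25%, and 20% of 3,500
        print(f"n = {n}: MDD = {mdd(n):.1%}")             # roughly 7% to 8% at 80% power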


The PSTAP Assessment is fielded no more than once in a 12-month period. For the purposes of this information collection request (ICR) renewal, the three-year average annual burden calculation is shown in Table B below. Attrition is a concern in all longitudinal studies. It can often be mitigated with techniques that boost participation, such as incentives, personal interviews, and telephone reminders, but these carry an additional monetary burden. The second and third waves of data collection (i.e., Year 2 and Year 3) are therefore estimated using data from past years of the PSTAP Assessment.

Table B. Average Annual Hourly Burden Calculation for the PSTAP Assessment

Cross-Sectional Survey

| Year | Cohorts 22-24 (Retention* / Responses) | Cohorts 25-27 (Retention* / Responses) | Cohorts 28-30 (Retention* / Responses) | Minutes | Hourly Burden |
| Year 1 | n/a / 10,500 | | | 18.5 | 3,238 |
| Year 2 | | n/a / 10,500 | | 18.5 | 3,238 |
| Year 3 | | | n/a / 10,500 | 18.5 | 3,238 |
| Average Annual Burden | | | | | 3,238 |

Longitudinal Survey

| Year | Cohorts 1-19 (Retention* / Responses) | Cohort 22 (Retention* / Responses) | Cohort 25 (Retention* / Responses) | Minutes | Hourly Burden |
| Year 1 | n/a / 4,500 | | | 18.5 | 1,388 |
| Year 2 | 88% / 3,960 | n/a / 950 | | 18.5 | 1,514 |
| Year 3 | 81% / 3,645 | 88% / 836 | n/a / 925 | 18.5 | 1,667 |
| Average Annual Burden | | | | | 1,523 |

Total Average Annual Burden: 4,761

*Retention rates have been calculated based on previous years of the PSTAP Assessment.


Based on expected response rates and retention rates for the study, the average annual burden for the PSTAP Assessment is 4,761 hours. The retention rates for this study were developed based on a review of the retention rates of the Vocational Rehabilitation and Employment (VR&E) longitudinal study currently being conducted by VA as well as previous years of the PSTAP Assessment.
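
The hourly burden figures in Table B follow from a simple computation: responses multiplied by 18.5 minutes, divided by 60. A short illustrative check:

    # Arithmetic behind Table B: hourly burden = responses * 18.5 minutes / 60.
    MINUTES_PER_RESPONSE = 18.5

    def burden_hours(responses):
        return responses * MINUTES_PER_RESPONSE / 60

    print(burden_hours(10_500))             # 3237.5 -> reported as 3,238
    print(burden_hours(3_645 + 836 + 925))  # 1666.85 -> reported as 1,667 (Year 3, longitudinal)

    yearly = [burden_hours(4_500), burden_hours(4_910), burden_hours(5_406)]
    print(sum(yearly) / 3)                  # ~1522.8 -> average annual longitudinal burden, 1,523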


Section 4306 Study


The purpose of the study is to document the outcomes of cohorts of TAP participants over a five-year period and to compare the outcomes across cohorts. Public Law 116-315 did not provide specific precision requirements for the longitudinal study. The study team conducted a power analysis to examine the sample sizes needed to produce a minimum detectable difference (MDD) of 5 percentage points across cohorts. The MDD is the difference in proportions for an outcome measure (e.g., employment) for two groups that must exist to detect a statistically significant relationship. For this study, we consider an MDD of 5% or less sufficient to detect expected differences in major outcomes across cohorts. We assumed a 95% confidence level (alpha = .05), a one-tailed significance test, and a treatment group proportion of 50%. Using a base proportion of 50% provides a “worst-case” estimate of statistical power. A sample size of 2,400 per cohort provides an MDD of 5% or less.


In addition to the MDD, it will also be important to provide estimates of outcomes for each cohort separately. Given a sample size of 2,400, the margin of error (MOE) will be 2.5%. This assumes a 95% confidence level (alpha = .05) and a proportion of 50%.


It will be important for the analysis to be powered to examine outcomes in each cohort separately for subgroups of Veterans defined by characteristics such as rank, race and ethnicity, and gender. Table C shows the MOE for subgroups of interest based on the expected distribution of characteristics in the population and differential response rates.[1] Table C provides MOEs under a design effect of 1.5 and a total sample size of 2,400.[2] The table shows that all but one of the MOEs are less than 10%, and several are under 5%. (The only MOE higher than 10% is Coast Guard at 19%, because Coast Guard members are a small percentage of the population.) We believe that these are acceptable MOEs for subgroup estimates. For subgroups with higher MOEs, it would be possible to oversample to obtain lower MOEs. For example, we could oversample females to lower the MOE from 7% to 5%. However, this would increase the MOE for males as well as the design effect, which would require a larger overall sample size.


Overall, the power analysis indicates that a sample size of 2,400 per cohort, or 7,200 overall, will be sufficient. This sample size will allow for detecting differences in outcomes of 5 percentage points across cohorts and a margin of error of 2.5 percentage points for each cohort. Moreover, it will permit subgroup estimates with a precision of less than 10 percentage points for all subgroups.

Table C. Margins of Error for Veteran Subgroups in Each Cohort, 95% confidence level (alpha = .05)

| Variable | Group | Rough estimated percentage of total | Rough estimate of MOE for p=50% (Design Effect of 1.5) |
| Branch | Air Force | 18 | 5.77% |
| Branch | Army | 46 | 3.61% |
| Branch | Coast Guard | 1.6 | 19.37% |
| Branch | Marine Corps | 18 | 5.77% |
| Branch | Navy | 17 | 5.94% |
| Grade | E-1 to E-3 | 16 | 6.13% |
| Grade | E-4 to E-6 | 60 | 3.16% |
| Grade | E-7 to E-9 | 10 | 7.75% |
| Grade | O-1 and above | 14 | 6.55% |
| Component | Active Duty | 84 | 2.67% |
| Component | Guard / Reserve | 16 | 6.13% |
| Separation Type | Voluntary | 65 | 3.04% |
| Separation Type | Involuntary | 23 | 5.11% |
| Separation Type | Unknown | 12 | 7.07% |
| Race | White | 73 | 2.87% |
| Race | Black/African American | 15 | 6.33% |
| Race | Other | 12 | 7.07% |
| Gender | Male | 82 | 2.71% |
| Gender | Female | 18 | 5.77% |

Source: Estimates calculated using data from the VA Department of Defense Identity Repository (VADIR)
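
The Table C margins of error can be reproduced with the standard formula MOE = z * sqrt(deff * p(1 - p) / n), where n is the subgroup's share of the 2,400-person cohort sample. A minimal sketch follows; note that the cohort-level MOE of roughly 2.5% cited earlier appears to correspond to the full sample of 2,400 once the design effect of 1.5 is included.

    # Sketch reproducing Table C: MOE = z * sqrt(deff * p(1-p) / n),
    # where n is the subgroup's share of the 2,400-person cohort sample.
    from math import sqrt

    Z95, P, DEFF, COHORT_N = 1.96, 0.50, 1.5, 2_400

    def moe(share_pct):
        n = COHORT_N * share_pct / 100           # expected subgroup sample size
        return Z95 * sqrt(DEFF * P * (1 - P) / n)

    for group, share in [("Army", 46), ("Coast Guard", 1.6), ("Female", 18)]:
        print(f"{group}: {moe(share):.2%}")      # 3.61%, 19.37%, 5.77% -- matching Table C

    print(f"Full cohort: {moe(100):.2%}")        # ~2.45%, the ~2.5% cited above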


The 4306 Study is fielded no more than once in a 12-month period. For the purposes of this information collection request (ICR), the three-year average annual burden calculation is shown in Table E below. Attrition is a concern in all longitudinal studies. It can often be mitigated with techniques that boost participation, such as incentives, personal interviews, and telephone reminders, but these carry an additional monetary burden. The second and third waves of data collection (i.e., Year 2 and Year 3) are therefore estimated using data from past years of the PSTAP Assessment and other studies.


There will be partial overlap between some PSTAP cohorts and 4306 cohorts. If a Veteran appears in both a PSTAP and a 4306 cohort based on their separation date, and they respond to the survey, we will use their responses in both the PSTAP and 4306 studies. Table E shows the expected responses. The assumptions used to estimate the extent of overlap between 4306 cohorts and PSTAP cohorts appear in Table D.


Table D. Assumed overlap between 4306 cohorts and PSTAP cohorts

| 4306 cohort | Expected separation date range | PSTAP cohort overlap |
| 1a | Oct 2020 - Sept 2021 | No overlap (not part of upcoming PSTAP data collection) |
| 1b | Jan 2024 - Dec 2024 | Overlap with half of PSTAP cohorts 16 (Dec 23 - Jan 24) and 17 (June - July 23) |
| 2 | Jan 2025 - Dec 2025 | Overlap with PSTAP cohorts 16 (Dec 23 - Jan 24), 19 (Dec 24 - Jan 25), and 20 (June - July 24) |
| 3 | October 2019 - September 2020 | No overlap (not part of upcoming PSTAP data collection) |



Table E provides the annual burden rates for the 4306 cohorts that appear in Table D. The expected numbers of 4306 Study responses exclude participants from overlapping cohorts who participate in the PSTAP.


Table E. Average Annual Burden Rates for the 4306 Study


Based on expected response rates and retention rates for the study, the average annual burden for the 4306 Study is 4,821 hours. The retention rates for this study were developed based on a review of the retention rates of the Vocational Rehabilitation and Employment (VR&E) longitudinal study currently being conducted by VA as well as previous years of the PSTAP Assessment.


Table F below provides a summary of the average annual burden for all three surveys over the next three years. In total, the average annual burden under this ICR is 9,582 hours.


Table F. Combined Average Annual Burden for All Surveys

  2. Describe the procedures for the collection of information including:

  • Statistical methodology for stratification and sample selection;

  • Estimation procedure;

  • Degree of accuracy needed for the purpose described in the justification;

  • Unusual problems requiring specialized sampling procedures; and

  • Any use of periodic (less frequent than annual) data collection cycles to reduce burden.


PSTAP study. As discussed in the previous question (Item 1), the study solicits responses from Veterans through two surveys. The Cross-Sectional Survey is a census of three cohorts of Veterans who separated from service approximately six (6) months, one (1) year, and three (3) years prior to the survey date. As a result, the Contractor does not utilize sampling or stratification procedures to identify participants. Participants in the Longitudinal Survey are the subset of Veterans who participated in the Cross-Sectional Survey and indicated that they would be willing to be contacted for future surveys.


Because the longitudinal study depends on responses to the cross-sectional survey to generate its pool of potential respondents, lower-than-expected response rates to the cross-sectional survey could result in a longitudinal sample that falls short of the sample size determined by the power analysis. As shown above, current response rates allow the Contractor to conduct a thorough analysis of Longitudinal Survey responses and draw accurate conclusions.


Post-stratification weights shall be used, drawing upon the population file to provide control totals; more detail is provided in the following item. For estimation of frequencies, means, cross tabulations, and other statistics, the Contractor will utilize the post-stratification weights. The Contractor will estimate weighted statistics representative of the population and will include the weighted standard errors associated with each estimate. The Contractor will also produce subgroup analyses. For analyses comparing subgroups, differences shall undergo significance testing at the 95% confidence level.


The PSTAP Assessment will not be conducted more frequently than once in a 12-month period. At the end of the instrument, Veterans are requested to opt into future waves of data collection and are informed in writing that they shall not be contacted more than once in a 12-month period.


4306 Study. The 4306 Study cohorts will be drawn using a combination of (1) random sampling from the population of Veterans in each cohort and (2) Veterans who appear in the PSTAP cohorts, if their pre-separation counseling completion date falls within a 4306 Study cohort. To determine the initial sample size needed in each cohort to obtain 2,400 respondents in Year 5, we estimated the initial response rate and the attrition in each subsequent year. We assumed an initial response rate of 15%.[3] If a Veteran separated within the date range of one of the 4306 cohorts and appears in a PSTAP cross-sectional cohort, they will receive the survey and, if they respond, will be included in the analysis for both the 4306 Study and the PSTAP study.


All longitudinal studies experience attrition among respondents who initially participate but drop out of subsequent interviews. We assume a retention rate of 70% at each subsequent interview; that is, we assume that in each year, 70% of those who responded in the previous year will respond.[4] Table G shows the initial sample size for each cohort and the expected sample size in each year, based on a Year 1 response rate of 15%, a 70% annual retention rate, and an expected design effect of 1.5. The initial sample size is 66,639 per cohort to obtain a final sample size of 2,400 in Year 5 (66,639 x 0.15 x 0.70^4 = 2,400). The expected population size of each cohort (approximately 150,000 Veterans) is large enough to support a sample of 66,639 Veterans in each cohort. We may need to augment the initial sample size given response rates from future PSTAP data collections; this should be feasible given that the population size of each cohort can support larger samples if needed.

Table G. Sample Sizes Per Cohort

| Data Collection | Number of Veterans |
| Population | 150,000 |
| Sample | 66,639 |
| Year 1 | 9,996 |
| Year 2 | 6,997 |
| Year 3 | 4,898 |
| Year 4 | 3,429 |
| Year 5 | 2,400 |
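
The Table G figures follow from working backward from the Year 5 target of 2,400 respondents; a short sketch:

    # Sketch of the Table G cascade: 15% Year 1 response, then 70% retention
    # in each later year, worked backward from 2,400 Year 5 respondents.
    RESPONSE_RATE, RETENTION, TARGET_Y5 = 0.15, 0.70, 2_400

    initial_sample = TARGET_Y5 / (RESPONSE_RATE * RETENTION ** 4)
    print(f"Initial sample: {initial_sample:,.0f}")   # 66,639

    respondents = initial_sample * RESPONSE_RATE
    for year in range(1, 6):
        print(f"Year {year}: {respondents:,.0f}")     # 9,996; 6,997; 4,898; 3,429; 2,400
        respondents *= RETENTION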


  3. Describe methods to maximize response rates and to deal with issues of nonresponse. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data to be generalized to the universe studied.


Nonresponse is mitigated for the PSTAP Assessment and 4306 Study in four primary ways: (1) offering multiple modes; (2) making multiple contact attempts; (3) minimizing instrument length; and (4) optimizing the visual layout to increase the ease of completing a paper questionnaire.


Survey response rates, in general, have been dropping over the last few decades. One way to boost response rates is to offer multiple modes. While some survey modes, such as those involving personal interviews, come at significant cost, a combination of web, paper, and other electronic modes is both cost-effective and preserves anonymity for the respondent. Web and other electronic methods are offered first to reduce the number of paper surveys that must be mailed to Veterans who do not respond after the first attempt. Using these methods for the longitudinal survey will also reduce the burden on survey respondents and allow for follow-up reminders that further reduce the need for paper surveys.


In year 1 of Cross-Sectional Survey deployment (2019), the PSTAP Assessment received lower-than-expected response rates due to the limited communication methods used. In year 2 (2020), the Contractor added email contact to the survey methods, which increased response rates by roughly 10 percentage points. This method continues to be used with the PSTAP Assessment and has shown higher success rates in 2020 and subsequent years.


The final instrument is the culmination of cognitive interviews and close coordination between VA, the Interagency Partners, and the Contractor. As with any survey development, there is often a desire to include more questions than is feasible without jeopardizing response rates. At all junctures, there was close coordination to ensure that both the concepts being measured and the number of questions were kept to a minimum to decrease respondent burden.


Visual layout can reduce the effort required for a respondent to comprehend a question and mark the right category. Techniques such as grids and alternating shading can decrease this burden.


Despite this multi-pronged strategy, an 80% response rate is unlikely and has been difficult to achieve for even the largest federal surveys. To assess and mitigate any potential bias caused by nonresponse, the Contractor will conduct a nonresponse bias analysis (NRBA) and produce nonresponse-adjusted post-stratification weights. The NRBA will draw on demographic information available from the population file (e.g., age, military service branch, grade/rank, etc.) to use as auxiliary variables. It will, at minimum, include the following:


  • Comparison of response rates for different subgroups;

  • Comparisons of the distributions of auxiliary variables for respondents and the full population; and

  • Use of a classification tree algorithm to identify subgroups with low response rates.


Any variables for which the distribution differs significantly (typically defined as p < 0.05 for a chi-square test or t-test, as appropriate) between respondents and the full population, or for which response rates vary significantly by subgroup, will be considered for use in post-stratification weighting. If many potential weighting variables are identified, priority will be given to variables that are more highly correlated with the primary outcomes of interest. The Contractor will post-stratify the survey weights to the population-file control totals for the selected variables, collapsing small cells as necessary. After weighting, the distributions of all auxiliary variables will again be compared to the corresponding population distributions to verify that the potential for bias has been reduced (i.e., fewer significant differences between the weighted distributions and the population distributions).
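
As an illustration of the post-stratification step described above, here is a minimal sketch. The column names and control totals are hypothetical, invented for illustration; they are not drawn from the actual population file, and this is not the Contractor's weighting code.

    # Hedged sketch of post-stratification: within each weighting cell, respondent
    # weights are scaled so they sum to the population control total for that cell.
    import pandas as pd

    def poststratify(resp, control_totals, cell_col="cell", weight_col="weight"):
        out = resp.copy()
        cell_weight_sums = out.groupby(cell_col)[weight_col].transform("sum")
        out[weight_col] *= out[cell_col].map(control_totals) / cell_weight_sums
        return out

    # Hypothetical example: cells defined by branch, base weights of 1.0.
    resp = pd.DataFrame({"cell": ["Army", "Army", "Navy"], "weight": [1.0, 1.0, 1.0]})
    totals = pd.Series({"Army": 92_000, "Navy": 34_000})  # invented control totals
    print(poststratify(resp, totals))
    # Army rows get weight 46,000 each; the Navy row gets 34,000.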

  4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.


Both the PSTAP Cross-Sectional and Longitudinal Surveys were pretested using interviews and a web survey questionnaire. Referrals for the pretesting subjects were obtained through the Contractor’s internal recruiting system. In addition, VA provided test subjects to allow for additional testing focused on question wording.


Interviews were conducted with four (4) members of the public, consistent with OMB regulations stating that testing shall not exceed nine (9) members of the public without applying for a generic clearance. These interviews were conducted from July 1, 2019 through August 15, 2019. In addition, program experts reviewed the survey and provided additional input.


The survey pretests were conducted online. Each test subject was sent a link to the online version of the survey, which included all instructions, questions, and a list of reflection questions. The reflection questions asked test respondents to report how long the survey took to complete, to indicate whether the survey design made the questions easy to understand, and to provide feedback on specific questions. Both surveys were approved by OMB in previous years.


For this renewal request, only a few questions are new. Several of those were taken from previously approved surveys to ensure they have been properly vetted. Beyond those, only additional response options were added to longstanding questions based on the needs of the 4306 Study.



  5. Provide the name and telephone numbers of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who actually collect and/or analyze the information for the agency.


The PSTAP Assessment is the culmination of significant federal planning and interagency coordination. In October 2018, a contract was awarded to Economic Systems Inc. and its subcontractor, Westat, to develop and maintain survey instruments and administer the surveys. Key personnel involved in the final stages of the design of the survey instrument and statistical planning of the study include:


Veterans Benefits Administration (202-378-3576)

Christina Zais, Assistant Director Transition

H. David Carter, Contracting Officer’s Representative

Michael Deeb, Lead Program Analyst


Jerome Jones, Ph.D., PMWG Chair (703-614-8676)

More than 40 representatives from the 7 federal agencies in this group.


Economic Systems Inc. 703-333-2197

Jacob Denne, Project Manager


Westat 301-212-2174

Jeffrey Taylor, Research Manager

Jay Clark, Senior Statistician

[1] Data from the VA Department of Defense Identity Repository (VADIR).

[2] There are two sources of potential design effects in the proposed study. The first is the need to adjust weights to account for differential nonresponse across subgroups of interest. The second is the potential inclusion of participants in the current PSTAP Assessment in the proposed longitudinal study. This is discussed later in this section.

[3] We base the expected response rate on the observed response rates from the 2020 PSTAP cross-sectional study, which recruited a similar population of recently separated Veterans. The 2020 PSTAP cross-sectional study achieved a response rate of 13.4%. A higher response rate is likely, especially if we offer an incentive for survey completion.

[4] We expect a retention rate similar to that achieved by the PSTAP longitudinal study. The 2020 PSTAP longitudinal study received responses to the first annual follow-up from 62.5% of baseline respondents who agreed to participate in the ongoing study. We believe that it is possible to achieve a 70% average retention rate over the entire 5-year study period.
