Children's Participation in Cultural and Leisure Activities, Australia methodology

Reference period: April 2012
Released: 31/10/2012

Explanatory notes

Introduction

1 This publication contains results from the 2012 Survey of Children's Participation in Cultural and Leisure Activities, conducted throughout Australia in April 2012 as a supplement to the Australian Bureau of Statistics' (ABS) monthly Labour Force Survey (LFS). Respondents to the LFS who were in scope of the supplementary survey were asked further questions.

2 The aims of the survey were to: identify characteristics of children who participated in organised sport, cultural activities and selected activities undertaken for recreation and leisure; identify characteristics of children who attended selected cultural venues and events; monitor the use of the internet by children; and identify children who have a mobile phone. The focus on activities outside of school hours is to elicit information on activities that are more likely to be undertaken by children by choice rather than those that are part of the school curriculum.

3 The survey is a continuation of a series of surveys on this topic conducted since April 2000. The previous survey was conducted in April 2009.

4 The publication Labour Force, Australia (cat. no. 6202.0) contains information about survey design, sample redesign, scope, coverage and population benchmarks relevant to the monthly LFS, which also apply to supplementary surveys. It also contains definitions of demographic and labour force characteristics, and information about computer assisted and telephone interviewing which are relevant to both the monthly LFS and supplementary surveys.

Scope

5 The scope of the survey was children aged 5-14 years who were usual residents of private dwellings except:

  • children of certain diplomatic personnel of overseas governments, customarily excluded from censuses and surveys
  • children of overseas residents in Australia
  • children of members of non-Australian defence forces stationed in Australia
  • children living in Indigenous Communities (excluded for operational reasons).

Coverage

6 The coverage of the survey was the same as the scope except that the following populations were not enumerated for operational reasons:

  • children in households where all persons aged 15 years and over were members of the Australian permanent defence forces
  • children in households where all persons aged 15 years and over were out of scope of the LFS for any other reason.

7 The estimates in this publication relate to persons covered by the survey. In the LFS, coverage rules are applied which aim to ensure that each child is associated with only one dwelling, and hence have only one chance of selection in the survey. See Labour Force, Australia (cat. no. 6202.0) for more details.

Data collection

8 Information was collected from any responsible adult in the household, who was asked to respond on behalf of the children in the household. In each selected household, information on cultural, sporting and selected recreational activities was sought for a maximum of three children: in households with four or more children aged 5-14 years, three children were randomly selected for the survey, and only selected demographic information was collected for the additional children. The response rate for the survey was approximately 96%. In total, fully responding records were collected about the activities of 7,300 children living in the selected households.

9 Data were collected on children's cultural and sporting activities undertaken outside of school hours over a 12 month period. Data on the frequency of participation relate to the 12 months before interview, while data on the number of hours of participation refer to the last two weeks of school, that is, the most recent two school weeks (weeks during the school term, not school holidays) prior to the interview, including weekends and public holidays. Data were also collected on children's participation in selected recreational activities during the last two weeks of school.

10 In a small number of households where a child was not living with a parent, information on the country of birth of parent and employment status of parent was collected for the person aged 15 years or over living in the household who was most closely related to the child.

11 All interviews were conducted using computer assisted interviewing (CAI) over a two week period during April 2012.

12 Supplementary surveys are not always conducted using the full LFS sample. Since August 1994, the sample for supplementary surveys has been restricted to no more than seven-eighths of the LFS sample.

Estimation method

Weighting

13 Weighting is the process of adjusting results from a sample survey to infer results for the total population. To do this a 'weight' is allocated to each enumerated person. The weight is a value which indicates how many persons in the population are represented by the sample person. The first step in calculating weights for each unit is to assign an initial weight, which is the inverse of the probability of the unit being selected in the survey. For example, if the probability of a person being selected in the survey was 1 in 300, then the person would have an initial weight of 300 (that is, they represent 300 people).
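The initial weighting step described above can be sketched as follows. This is a minimal illustration only; the selection probabilities shown are assumed example values, not survey figures.

```python
# Hypothetical illustration: each enumerated person's initial weight is
# the inverse of their probability of selection in the survey.
selection_probabilities = [1 / 300, 1 / 450, 1 / 300]  # assumed example values

initial_weights = [1 / p for p in selection_probabilities]

# A person selected with probability 1 in 300 represents 300 people.
print(round(initial_weights[0]))  # 300
```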

Population benchmarks

14 The initial weights are then calibrated to align with independent estimates of the population, referred to as benchmarks. The population included in the benchmarks is the survey scope. This calibration process ensures that the weighted data conform to the independently estimated distribution of the population described by the benchmarks rather than to the distribution within the sample itself. Calibration to population benchmarks helps to compensate for over or under-enumeration of particular categories of persons which may occur due to either the random nature of sampling or non-response.

15 The survey was benchmarked to the estimated resident population (ERP) in each state and territory at April 2012 subject to scope exclusions as outlined above.
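The effect of benchmarking can be illustrated with a simplified post-stratification sketch along a single dimension (state). This is not the ABS's actual calibration algorithm, and all records and benchmark totals below are hypothetical; it shows only the principle that initial weights are scaled so weighted totals match the independent population benchmarks.

```python
# Simplified post-stratification sketch (one benchmark dimension only).
# Initial weights are scaled so that weighted totals in each state match
# an independent population benchmark.
sample = [
    {"state": "NSW", "weight": 280.0},
    {"state": "NSW", "weight": 310.0},
    {"state": "VIC", "weight": 295.0},
]  # hypothetical sample records

benchmarks = {"NSW": 900_000, "VIC": 450_000}  # hypothetical benchmark totals

# Sum the initial weights within each state.
weighted_totals = {}
for person in sample:
    state = person["state"]
    weighted_totals[state] = weighted_totals.get(state, 0.0) + person["weight"]

# Scale each weight by the ratio of benchmark to weighted sample total.
for person in sample:
    person["weight"] *= benchmarks[person["state"]] / weighted_totals[person["state"]]

# After calibration, the weights in each state sum to the benchmark.
nsw_total = sum(p["weight"] for p in sample if p["state"] == "NSW")
print(round(nsw_total))  # 900000
```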

Estimation

16 Survey estimates of counts of persons are obtained by summing the weights of persons with the characteristic of interest.

Reliability of the estimates

17 All sample surveys are subject to error which can be broadly categorised as either sampling error or non-sampling error.

18 Sampling error is the difference between the published estimates, derived from a sample of children, and the value that would have been produced if all children in scope for the survey had been included. For more information refer to the Technical Note.

19 Non-sampling error may occur in any collection, whether it is based on a sample or a full count such as a census. Sources of non-sampling error include non-response, errors in reporting by respondents or recording answers by interviewers, and errors in coding and processing data. Every effort was made to reduce the non-sampling error by careful design and testing of the questionnaire, training and supervision of interviewers, follow-up of respondents, and extensive editing and quality control procedures at all stages of data processing.

Data quality

20 Information recorded in this survey is essentially 'reported' by respondents and hence may differ from that which might be obtained from other sources or via other methodologies. This factor should be considered when interpreting the estimates in this publication.

Data comparability

21 The Children's Participation in Cultural and Leisure Activities Survey was previously conducted in 2000, 2003, 2006 and 2009 as supplements to the Labour Force Survey. Computer assisted telephone interviewing was introduced during 2003 and while information was collected using a paper form for the majority of households in 2003, computer assisted interviewing was used for all survey interviews in the 2006, 2009 and 2012 surveys. This change in the methodology is not expected to impact on the comparability of the data between the surveys.

22 Changes between the 2000, 2003 and 2006 surveys are described in 'Comparability with previous ABS surveys' of the Explanatory Notes in Children's Participation in Cultural and Leisure Activities, April 2006 (cat. no. 4901.0). Changes between the 2006 and 2009 surveys are described in 'Comparability with previous ABS surveys' of the Explanatory Notes in Children's Participation in Cultural and Leisure Activities, April 2009 (cat. no. 4901.0). Changes between the 2009 and 2012 surveys are described below.

23 Data collected about information technology have changed between each iteration of this survey. In 2009, data were collected on children's personal safety and security when using mobile phones and the internet, as well as on whether a child used a mobile phone for contacting family or friends. This information was not collected in 2012; instead, data were collected on the specific devices used to access the internet at home.

24 For the 2012 survey, data on organised art and craft were collected separately from recreational art and craft. In previous surveys, participation in art and craft activities was collected as a single category, with no separation between the types of participation. As art and craft has been split into two categories for 2012, no time series comparisons with previous versions of this survey should be made. For further information on recreational art and craft and organised art and craft see the Glossary.

25 For 2012, data collected for participation in cultural activities have changed. Data were not collected on the number of times a child participated in playing a musical instrument, singing, dancing, or drama in the last 12 months. Instead, information on participation in organised art and craft activities was included.

26 The scope of sports included in the 2012 survey has changed from previous years, with the focus now on organised sports. Physical activities that are not sports, such as jogging for exercise and gym workouts, are now excluded.

27 Numerous sports have had changes to their definitions compared with previous survey iterations, including: Australian Rules football; Gymnastics; Indoor soccer; Martial arts; Athletics, track and field; and Touch football. The combined total categories 'Other organised sport', 'At least one organised sport' and 'Organised sports and/or dancing' are therefore different in 2012 compared with previous years, and time series comparisons should not be made. For further information see the Glossary.

Classifications

28 Country of birth data are classified according to the Standard Australian Classification of Countries (SACC), Second Edition, 2008 (cat. no. 1269.0).

Products and services

29 Summary results from this survey, compiled separately for each state and territory, will be available in a Datacube (spreadsheet form) from the ABS website www.abs.gov.au or on request to the ABS.

30 Special tabulations are available on request. Subject to confidentiality and sampling variability constraints, tabulations can be produced from the survey incorporating data items, populations and geographic areas selected to meet individual requirements. These can be provided in printed or electronic form.

Acknowledgments

31 ABS publications draw extensively on information provided freely by individuals, businesses, governments and other organisations. Their continued cooperation is very much appreciated; without it, the wide range of statistics published by the ABS would not be available. Information received by the ABS is treated in strict confidence as required by the Census and Statistics Act 1905.

Next survey

32 The ABS plans to conduct this survey again in April 2015.

Technical note - data quality

Reliability of the estimates

1 Since the estimates in this publication are based on information obtained from a sample, they are subject to sampling variability. That is, they may differ from those estimates that would have been produced if all dwellings had been included in the survey. One measure of the likely difference is given by the standard error (SE), which indicates the extent to which an estimate might have varied by chance because only a sample of dwellings (or households) was included. There are about two chances in three (67%) that a sample estimate will differ by less than one SE from the number that would have been obtained if all dwellings had been included, and about 19 chances in 20 (95%) that the difference will be less than two SEs.

2 Another measure of the likely difference is the relative standard error (RSE), which is obtained by expressing the SE as a percentage of the estimate.

\(RSE\,\% = \left(\frac{SE}{\text{estimate}}\right) \times 100\)
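The formula above translates directly into code. The figures used here are the SE and estimate from the worked example later in this note, so the result recovers the quoted RSE of 3.1%.

```python
# RSE expressed as a percentage of the estimate, per the formula above.
def rse_percent(se: float, estimate: float) -> float:
    return (se / estimate) * 100

# Using the worked-example figures from this note (SE 15,200; estimate 490,200).
print(round(rse_percent(15_200, 490_200), 1))  # 3.1
```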

3 RSEs for CPCLA have been calculated using the Jackknife method of variance estimation. This involves the calculation of 30 'replicate' estimates based on 30 different sub-samples of the obtained sample. The variability of estimates obtained from these sub-samples is used to estimate the sample variability surrounding the estimate.
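The delete-a-group jackknife can be sketched as follows. This is a simplified, unweighted illustration on made-up data, assuming the standard group-jackknife variance formula with 30 groups; the ABS's production implementation operates on replicate weights and may differ in detail.

```python
import math
import random

# Delete-a-group jackknife sketch with G = 30 replicate groups.
# Each replicate estimate is computed with one group of the sample removed.
G = 30
random.seed(1)
values = [random.uniform(0, 10) for _ in range(300)]  # hypothetical data
groups = [values[g::G] for g in range(G)]             # 30 systematic groups

full_estimate = sum(values) / len(values)

replicate_estimates = []
for g in range(G):
    remaining = [v for h, group in enumerate(groups) if h != g for v in group]
    replicate_estimates.append(sum(remaining) / len(remaining))

# The variability of the 30 replicate estimates around the full-sample
# estimate gives the variance estimate (standard group-jackknife formula).
variance = (G - 1) / G * sum((r - full_estimate) ** 2 for r in replicate_estimates)
se = math.sqrt(variance)
```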

4 RSEs of all the estimates in this publication are included in the Datacubes released as part of the publication and available from the Data downloads section of the publication.

5 Table 1 contains estimates collected from previous Children's Participation in Cultural and Leisure Activities surveys. The spreadsheets associated with this release contain RSEs for these estimates. The RSEs for 2006 were calculated using a different statistical SE model, details of which are available in the 2006 issue of Children's Participation in Cultural and Leisure Activities, Australia (cat. no. 4901.0) and on the ABS website www.abs.gov.au. From the 2009 survey onwards, the RSEs were directly calculated for each separate estimate. While the direct method is more accurate, the difference between the two methods is usually not significant for most estimates.

6 Only estimates (numbers and proportions) with RSEs less than 25% are considered sufficiently reliable for most purposes. Estimates with RSEs between 25% and 50% have been included with a cell comment to indicate they are subject to high sample variability and should be used with caution. Estimates with RSEs greater than 50% are included with a cell comment to indicate that they are considered too unreliable for general use.

Calculation of standard error

7 Standard errors can be calculated using the estimates (counts or proportions) and their corresponding RSEs. For example, Table 5 shows the estimated number of children who participated in playing a musical instrument was 490,200. The RSE table corresponding to the estimates in Table 5 (included in the Datacubes in the Data downloads section) shows the RSE for this estimate is 3.1%. The SE is calculated by:

\(\begin{aligned} \text{SE of estimate} &= \left(\frac{RSE}{100}\right) \times \text{estimate} \\ &= 0.031 \times 490{,}200 \\ &= 15{,}200 \text{ (rounded to the nearest hundred)} \end{aligned}\)

8 Therefore, there are about two chances in three that the value that would have been produced if all children had been included in the survey will fall within the range 475,000 to 505,400 and about 19 chances in 20 that the value will fall within the range 459,800 to 520,600. This example is illustrated in the diagram below:

An image showing the ranges involved with data output standard error for the estimate published as thousands.
The image shows a horizontal scale with numbers 459.8, 475.0, 490.2, 505.4 and 520.6, represented as thousands. An arrowed line shows that there are about two chances in three that the true value that would have been produced if all children had been included in the survey will fall within the range 475,000 to 505,400. Another arrowed line shows that there are about 19 chances in 20 that the value will fall within the range 459,800 to 520,600.
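The worked example above can be reproduced in a few lines, deriving the SE from the RSE and then the one-SE and two-SE ranges quoted in the text.

```python
# Reproducing the worked example: SE from the RSE, then the one-SE
# (about two chances in three) and two-SE (about 19 chances in 20) ranges.
estimate = 490_200
rse = 3.1  # per cent

se = round((rse / 100) * estimate, -2)  # rounded to the nearest hundred
print(se)  # 15200.0

one_se_range = (estimate - se, estimate + se)
two_se_range = (estimate - 2 * se, estimate + 2 * se)
print(one_se_range)  # (475000.0, 505400.0)
print(two_se_range)  # (459800.0, 520600.0)
```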

9 In general, the size of the SE increases as the size of the estimate increases. Conversely, the RSE decreases as the size of the estimate increases. Very small estimates are subject to such high RSEs that their value for most practical purposes is unreliable and should only be used to aggregate with other estimates to provide derived estimates with RSEs of less than 25%.

Calculation of standard error for means and medians

10 This publication contains means and medians. Both are measures for locating the centre of a set of values, but each measure has its own method of calculation. The mean is the arithmetic average, whereas the median is the middle value of a set of values when the values are sorted in size order.

11 Table 5 shows that the estimated number of children who played a musical instrument was 490,200 with the median number of hours those children played a musical instrument being 3 hours in the last 2 weeks of school.

12 Standard errors can be calculated using the estimates (means or medians) and their corresponding RSEs. The RSE table corresponding to the estimates in Table 5 (see Datacubes in the Data downloads section) shows the RSE for the estimated median number of hours children played a musical instrument in the last 2 weeks of school is 8.5%. The SE is calculated by:

\(\begin{aligned} \text{SE of estimate} &= \left(\frac{RSE}{100}\right) \times \text{estimate} \\ &= 0.085 \times 3 \\ &= 0.255 \end{aligned}\)

13 Therefore, there are about two chances in three that the value that would have been produced if all children had been included in the survey will fall within the range 2.7 to 3.3, and about 19 chances in 20 that the value will fall within the range 2.5 to 3.5. This example is illustrated in the diagram below:

An image showing the ranges involved with data output standard error for the estimate published as average hours.
The image shows a horizontal scale with numbers 2.5, 2.7, 3.0, 3.3, and 3.5 represented as average hours. An arrowed line shows that there are about two chances in three that the true value that would have been produced if all children had been included in the survey will fall within the range 2.7 to 3.3. Another arrowed line shows that there are about 19 chances in 20 that the value will fall within the range 2.5 to 3.5.

Differences

14 Published estimates may also be used to calculate the difference between two survey estimates (of numbers or proportions). Such an estimate is also subject to sampling error. The sampling error of the difference between two estimates depends on their SEs and the relationship (correlation) between them. An approximate SE of the difference between two estimates (x-y) may be calculated by the formula:

\(SE(x-y)=\sqrt{[SE(x)]^{2}+[SE(y)]^{2}}\)

15 While this formula will only be exact for differences between separate and uncorrelated (unrelated) characteristics of sub-populations, it is expected to provide a good approximation for all differences likely to be of interest in this publication.

Significance testing

16 A statistical significance test for any comparisons between estimates can be performed to determine whether it is likely that there is a true difference between the corresponding population characteristics. The standard error of the difference between two corresponding estimates (x and y) can be calculated using the formula above. This standard error is then used to calculate the following test statistic:

\(\Large\left(\frac{x-y}{SE(x-y)}\right)\)

17 If the value of this test statistic is greater than 1.96, then there is evidence, with a 95% level of confidence, of a statistically significant difference between the two populations with respect to that characteristic. Otherwise, it cannot be stated with confidence that there is a real difference between the populations with respect to that characteristic.
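The two formulas above, the SE of a difference and the test statistic, can be combined as follows. The first estimate and its SE are taken from the worked example earlier in this note; the second estimate and SE are hypothetical figures for illustration only.

```python
import math

# SE of the difference between two estimates, then the significance test
# statistic, using the formulas above.
x, se_x = 490_200, 15_200  # worked-example figures from this note
y, se_y = 450_000, 14_000  # hypothetical comparison estimate

se_diff = math.sqrt(se_x**2 + se_y**2)
test_statistic = (x - y) / se_diff

# A test statistic whose absolute value exceeds 1.96 indicates a
# statistically significant difference at the 95% level of confidence.
significant = abs(test_statistic) > 1.96
```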

Glossary


Quality declaration

Institutional environment

Relevance

Timeliness

Accuracy

Coherence

Interpretability

Accessibility

Abbreviations
