How Australians Use Their Time methodology

Reference period
2006
Released
21/02/2008

Explanatory notes

Introduction

1 This publication presents statistics compiled from data collected in the 2006 Time Use Survey (TUS) by both computer-assisted personal interview and respondent diary completion. The 2006 survey was the third national time use survey conducted in Australia. Previous time use surveys were conducted in 1992 and 1997.

2 The major aims of the 2006 Time Use Survey were to:

  • measure the daily activity patterns of people in Australia to establish the current Australian time use profile;
  • provide information on differences in patterns of paid work and unpaid household and community work by sex and other characteristics;
  • measure the volume of unpaid household, voluntary and community work, in its own right and as a basis for a satellite account for unpaid household work;
  • provide information on the ways in which Australians balance work and family obligations;
  • provide information on time use and its effects on a wide range of other areas of interest, such as:
     
    • family dynamics;
    • caring for people with disabilities and older people;
    • caring for children;
    • education activities;
    • leisure activities;
    • fitness and health activities;
    • radio and television listening/watching;
    • use of other technology;
    • transport, public and private;
    • outsourcing of domestic tasks;
    • patterns of interaction with others; and
       
  • allow comparisons with the 1992 and 1997 surveys in order to identify changes in patterns of time use over time.
     

Conduct of the survey

3 Survey enumeration was conducted over four 13-day periods in 2006, chosen to contain a representative proportion of public holidays and school holidays:

  • 20th February - 4th March 2006;
  • 24th April - 6th May 2006;
  • 26th June - 8th July 2006; and
  • 23rd October - 4th November 2006.
     

Scope

    4 The scope of the estimates from this survey is all usual residents in private dwellings throughout Australia, excluding very remote dwellings. The survey collected information by personal interview from usual residents of private dwellings in urban and rural areas of Australia, covering about 98 per cent of the people living in Australia. Private dwellings are houses, flats, home units, caravans, garages, tents and other structures that are used as places of residence at the time of interview. Long-stay caravan parks are also included. These are distinct from non-private dwellings which include hotels, boarding schools, boarding houses and institutions. Residents of non-private dwellings are excluded.

    5 The survey excludes:

    • households which contain members of non-Australian defence forces stationed in Australia;
    • households which contain diplomatic personnel of overseas governments; and
    • households in collection districts defined as very remote or Indigenous Communities.
       

    Sample design

      6 The 2006 Time Use Survey results were compiled from a sample of about 3,900 households across Australia, sufficient to provide estimates for characteristics which are relatively common and for sub-populations which are relatively large and spread fairly evenly geographically. Because time use on weekend days is quite different from time use on weekdays, the proportion of total diary days allocated to weekend days was increased in 2006 compared with earlier time use surveys. This reduces sampling error in many estimates of total time use by activity, and enables better comparisons of time use on Saturdays and Sundays, both with each other and with weekdays.

      7 The survey was conducted using a stratified multistage area sample of private dwellings (houses, flats etc.) in both urban and rural areas in all States and Territories, except for very remote parts of Australia. The sample was selected to ensure that each dwelling within each of the geographic areas covered by the survey had an equal probability of selection. Different states and regions were allocated sample roughly in proportion to their population so that accurate national estimates could be obtained. All persons usually resident within the selected dwellings were included in the survey. A detailed description of the sample design can be found in the Time Use Survey: User Guide, 2006 (Cat. no. 4150.0).

      Data collection

      8 Information was obtained in the Time Use Survey partly by interview and partly by self-completion diary. Trained ABS interviewers collected information about the household and its members from an adult member of the selected household. The interviewer also instructed the interviewee on how resident adult household members (aged 15 years and over) were to record their activities (including their nature, timing and duration) in the diaries supplied, over two specified days. Instructions and two completed sample pages were provided at the beginning of the diaries to guide respondents on the type of information and level of detail required. The layout of the diary was unchanged from the 1997 TUS.

      9 The diary was divided into two separate days, with fixed intervals of five minutes covering 24 hours from 12 am. Five columns with question headings organised responses into primary and secondary activities, for whom the activity was done, who else was there and where the activity took place. The diary included several questions at the start and end of each diary day relating to the individual. Diaries were collected by the interviewer on a return visit or mailed back to the ABS in a Reply Paid envelope.

      Data processing

      10 A combination of clerical and computer-based systems was used to process data obtained in the survey. A variety of methods was needed to process and edit the data, reflecting the different modes used to collect the interview and diary components of the survey.

      11 Processing of the diaries involved sorting the reported activities into episodes, editing where necessary, and entering episodes into a data entry system where a look-up list of activities and detailed category screens allowed for consistency in coding. Interactive range and logical edits were used to detect unacceptable values and ensure that fields were appropriately coded. The quality of diary coding was also regularly monitored. A more detailed description of data processing can be found in the Time Use Survey: User Guide, 2006 (Cat. no. 4150.0).

      Data items

      12 Basic demographic and socio-economic characteristics were collected. These included age, sex, birthplace, employment, education and income. The activity classification in the tables indicates the type of activity information collected. Further disaggregation is possible for some items depending on the participation rate. An activity episode can contain the following elements:

      • start and finish time;
      • primary activity;
      • secondary activity;
      • person or group 'for whom' the activity is done;
      • location, both physical and spatial;
      • mode of transport for travel items;
      • technology/communication code where relevant;
      • who the respondent was with;
      • age details of any household people present; and
      • health details of any household people present.
         

        13 A complete activity classification and more detailed description of data items are available in the Time Use Survey: User Guide, 2006 (Cat. no. 4150.0).

        14 Income is used in several tables as a characteristic of the persons for whom time use is being presented. Household equivalised gross weekly income is the income measure, derivable from TUS information, that best allows for comparisons of time use relative to income, because it allows comparison of the relative economic wellbeing of people living in households of different sizes and composition. For more information on equivalised income, see Household Income and Income Distribution, Australia, 2005-06 (Cat. no. 6523.0).

        Changes since the 1997 survey

        15 The 2006 Time Use Survey was designed to be as comparable as possible with the 1997 survey. However, user consultation identified some additional data items, and some modifications to data items used in 1997, that would improve the usefulness of the survey results. Notwithstanding these improvements to quality and usefulness for 2006, a high level of comparability with 1997 survey results has been achieved.

        16 The activity classification used for the 1997 survey was reviewed with users and a number of minor changes were made, which have a negligible impact on comparability between 1997 and 2006 results.

        17 Changes to diary episode data items included collecting additional detail in 2006 (these changes do not affect comparability at the higher levels, which remain consistent between 1997 and 2006):

        • additional communication/technology categories to encompass changes in technology and computer usage since 1997 and to allow for all technology use to be captured for primary activities instead of just the technology used for communication;
        • additional 'for whom' categories to distinguish between household members who were well and those who were sick, frail or had a disability;
        • an additional category in the 'spatial location' item for waiting in a car; and
        • the 'country, bush, beach' category for 'physical location' was separated into its individual components.
           

        18 Coding rule changes included:

        • in the 2006 survey, coding of the activity 'talking/reading/playing with children' differed from the 1997 survey. In 1997, when a respondent reported 'talking to family' and both children under 15 and adults were present, this activity was usually coded to 'talking for recreation and leisure' and no time was allocated to 'talking to children' unless it was clear that the child participated in the conversation. In 2006, however, episodes of 'talking to family' were split into two episodes, one for time talking to adults and one for time talking to children, when children under 15 were present. This change is likely to add to the time spent on the 'talking/reading/playing with children' category and lessen time spent on general conversation, and should be taken into account when making comparisons with the earlier surveys;
        • in 2006, all episodes of 'talking to children' were coded as primary activities and 'talking to adults' was coded as a secondary activity as all child care activities other than passive minding were treated as primary activities;
        • communication codes were used more consistently in 2006 compared to 1997, for example, in 2006 no communication/technology codes were used for employment activities as this information was generally not provided in respondents' diaries;
        • the 'for whom' information was coded more consistently in 2006, for example, all personal care activities such as sleeping, eating or personal hygiene were coded as being 'for self';
        • travel to and from eating locales was coded to travel associated with purchasing in 2006, whereas in 1997 it was coded to travel associated with recreation and leisure. This change may affect the comparability of these two activities between the 1997 and 2006 surveys; and
        • in 2006, there were changes to the way that purchasing episodes were coded. In 1997, purchasing a meal or alcoholic drink at an eating or drinking locale was coded to either eating or social/alcoholic drinking. However, in 2006, the first five minutes of episodes at the eating or drinking locale were coded to purchasing consumer goods and the remainder was coded to either eating or drinking. This should be taken into account when making comparisons with the earlier surveys.
           

        Activity aspects

          19 An activity can be categorised in many different ways, to reflect different aspects of time use. An activity has a basic nature, but also an intent or purpose. The inclusion of a 'Who did you do this for?' ('for whom') column in the diary of the 1997 and 2006 surveys provided direct information about the purpose of an activity. Activities which can be recorded as 'helping', 'caring' or 'unpaid community services' are not always reported at the 'intent or purpose' level. In fact, they consist of, and are usually described in terms of, a wide range of specific acts such as visiting, cooking, nursing, lending books, washing clothes, moving furniture, and organising fundraising.

          20 In 1997 and in 2006, activities were coded to their basic nature. This is identified as the 'nature of activity' classification. The 'for whom' item is then used in conjunction with the nature classification to derive the 'purpose of activity' classification. The 'purpose of activity' classification provides the maximum information on volunteering, caring and helping. The 'concordance of activities' information, shown in tables 1 and 2, is derived using the 'for whom' and communication/technology items in conjunction with the nature of activity classification to allow comparison across the three surveys. The totals for activities in tables 1 and 2 may vary from the activity totals in tables 3 to 21 as tables 1 and 2 use the concordance of activities to enable comparison with the 1992 survey, while the remaining tables use the 'purpose of activity' classification. All three classifications are correct for their appropriate uses.

          21 The Nature, Purpose and Concordance Comparison table in Appendix 1 shows the differences between average time for activities.

          Survey methodology

          Weighting

          22 'Weighting' is the process of adjusting results from a sample survey to infer results for the total population. To do this, a 'weight' is allocated to each sample unit, e.g. person-day, person or household. The weight is a value which indicates how many population units are represented by the sample unit.
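The idea can be sketched with a toy example (hypothetical figures, not ABS code): each sampled unit carries a weight equal to the number of population units it represents, so a weighted sum of sample values estimates a population total, and dividing by the sum of weights gives a population mean.

```python
# Illustrative only: hypothetical sample values and weights.

def weighted_total(values, weights):
    """Estimate a population total as the weighted sum of sample values."""
    return sum(v * w for v, w in zip(values, weights))

def weighted_mean(values, weights):
    """Estimate a population mean: weighted total divided by sum of weights."""
    return weighted_total(values, weights) / sum(weights)

# Three sampled person-days: minutes spent on an activity, and their weights.
minutes = [120, 90, 150]
weights = [500, 650, 420]

total = weighted_total(minutes, weights)  # estimated population total minutes
mean = weighted_mean(minutes, weights)    # estimated mean minutes per person-day
```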

          23 The 2006 Time Use Survey was conducted over four 13-day collection periods and population distributions appropriate to each period were used in the weighting process. Significant changes in the weighting methodology from that used in the 1997 Time Use Survey have been instituted to:

          • calculate more appropriate initial selection weights;
          • better deal with differences in non-response and sampling fractions for each State and Territory by region; and
          • utilise a finer level estimated distribution of the person and person-day populations.
             

          Person-day weights

          24 Person-day estimates obtained from the Time Use Survey were derived using a ratio estimation procedure. Estimates from the survey were obtained by weighting person-day responses to represent the in-scope population of the survey. Calculation of weights for person-days was carried out in two steps, the first being the calculation of the initial weight, and the second being the calibration to population benchmarks. For further information refer to the Time Use Survey: User Guide, 2006 (Cat. no. 4150.0).

          Person weights and household weights

          25 Person and household estimates obtained from the Time Use Survey were also derived using a ratio estimation procedure. Person and household estimates from the survey were obtained by weighting person level and household level responses, respectively, to represent the in-scope population of the survey. Calculation of weights for persons and households was carried out in two steps, the first being the calculation of the initial weight, and the second being the calibration to population and household benchmarks, respectively. For further information refer to the Time Use Survey: User Guide, 2006 (Cat. no. 4150.0). Table 19 was produced using the household weight.
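The two-step procedure above can be sketched as a simple post-stratification: initial weights are scaled within each benchmark cell so that the weighted counts match the cell's population benchmark. This is an illustrative simplification with hypothetical cells and figures; the actual ABS calibration is more elaborate.

```python
from collections import defaultdict

def calibrate(units, benchmarks):
    """units: list of (cell, initial_weight); benchmarks: {cell: population}.
    Returns calibrated weights, in the same order as `units`."""
    cell_totals = defaultdict(float)
    for cell, w in units:
        cell_totals[cell] += w
    # Step 2: scale each initial weight so cell totals hit the benchmarks.
    return [w * benchmarks[cell] / cell_totals[cell] for cell, w in units]

# Hypothetical example: two benchmark cells and three sampled persons.
units = [("male", 400.0), ("male", 600.0), ("female", 500.0)]
benchmarks = {"male": 2000.0, "female": 450.0}
final_weights = calibrate(units, benchmarks)
# Male weights are scaled by 2000/1000 = 2; the female weight by 450/500 = 0.9.
```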

          Benchmarks

          26 For the person-day and person benchmarking, two sets of population distributions (benchmarks) were used for each collection period of the Time Use Survey. Similarly, for the household benchmarking, two sets of household benchmarks were used for each collection period.

          Person-day benchmarks

          27 The first set of person-day benchmarks was at the State by region (capital city/rest of State) by sex level and was obtained by averaging population distribution estimates on each side of the time use collection periods.

          28 The second set of benchmarks was at the sex by age group by employment status by day type level. For further information refer to the Time Use Survey: User Guide, 2006 (Cat. no. 4150.0).

          Person benchmarks

          29 The first set of person level benchmarks are the same as those used for the person-day benchmarking. The second set of person benchmarks use sex by age group by employment status as for the second set of the person-day benchmarks, but without the split by day type.

          Household benchmarks

          30 The first set of household benchmarks was at the State by region level, and the second at the household composition (number of adults and children) level. Both sets were obtained by averaging resident household estimates available for periods just before and just after each of the four 2006 Time Use Survey enumeration periods. For further information refer to the Time Use Survey: User Guide, 2006 (Cat. no. 4150.0).

          Reliability of the estimates

          31 Two types of error are possible in an estimate based on a sample survey: sampling error and non-sampling error.

          Non-sampling error

          32 Non-sampling error can occur in any collection, whether the estimates are derived from a sample or from a complete collection such as a census. Sources of non-sampling error include non-response, errors in reporting by respondents, errors in recording of answers by interviewers, and errors in coding and processing of data.

          33 Non-sampling errors are difficult to quantify in any collection. However every effort is made to reduce non-sampling error by careful design and testing of the questionnaire, training of interviewers and data entry staff, and extensive editing and quality control procedures at all stages of data processing.

          Non-response bias

          34 One of the main sources of non-sampling error is non-response, which occurs when persons resident in households selected in the survey cannot be contacted or, if contacted, are unable or unwilling to participate. Non-response can affect the reliability of results and can introduce bias. The magnitude of any bias depends upon the level of non-response and the extent of the difference between the characteristics of those people who responded to the survey and those who did not. For the 2006 TUS, some of the non-response resulted from logistical difficulty in aligning interview times with allocated diary days rather than from the unwillingness of selected household members to participate in the survey.

          Steps to minimise errors

          35 Every effort is made to reduce non-sampling error by careful design and testing of the questionnaire and diaries, training of interviewers and other staff, use of detailed coding instructions, and extensive editing and quality control procedures at all stages of processing.

          Sampling error

          36 Sampling error is a measure of the variability that occurs by chance because a sample, rather than the entire population, is surveyed. Since the estimates in the Time Use Survey publication are based on information obtained from occupants of a sample of dwellings they are subject to sampling variability. That is, they may differ from the figures that would have been produced if all dwellings had been included in the survey. One measure of the likely difference is given by the standard error (SE), which indicates the extent to which an estimate might have varied by chance because only a sample of dwellings was included. There are about two chances in three (67%) that a sample estimate will differ by less than one SE from the figure that would have been obtained if all dwellings had been included, and about nineteen chances in twenty (95%) that the difference will be less than two SEs.

          37 Another measure of the likely difference is the relative standard error (RSE), which is obtained by expressing the SE as a percentage of the estimate:

          \(RSE\% = \frac{SE}{\text{estimate}} \times 100\)

          38 The RSE is a useful measure in that it provides an immediate indication of the percentage errors likely to have occurred due to sampling, and thus avoids the need to refer also to the size of the estimate.
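As a minimal illustration, the RSE formula in paragraph 37 reduces to a one-line helper (hypothetical figures):

```python
# Illustrative helper for the RSE formula above.

def rse_percent(estimate, se):
    """Relative standard error: the SE as a percentage of the estimate."""
    return se / estimate * 100.0

# An estimate of 40,000 with a standard error of 3,000 has an RSE of 7.5%.
```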

          39 RSEs for estimates from the 2006 Time Use Survey are published for the first time in 'direct' form. Previously, a statistical model related the size of estimates to their corresponding RSEs, and this information was displayed via an 'SE table'. For the 2006 Time Use Survey, RSEs have been calculated and published individually for each estimate. The Grouped Jackknife method of variance estimation was used for this process, which involved the calculation of 60 'replicate' estimates based on 60 different subsamples of the original sample. The variability of estimates obtained from these subsamples is used to estimate the sample variability surrounding the main estimate.
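The replicate approach can be sketched as follows. This is an illustrative grouped-jackknife calculation with toy figures (four replicates rather than 60), not the ABS implementation: each replicate estimate is formed with one group of sample units dropped and the remaining weights adjusted, and the spread of the replicates around the full-sample estimate yields the SE.

```python
import math

def jackknife_se(full_estimate, replicate_estimates):
    """SE from G replicates: sqrt((G - 1) / G * sum((rep - full)^2))."""
    g = len(replicate_estimates)
    squared_devs = sum((r - full_estimate) ** 2 for r in replicate_estimates)
    return math.sqrt((g - 1) / g * squared_devs)

# Toy example: four replicate estimates around a full-sample estimate of 100.
se = jackknife_se(100.0, [98.0, 101.0, 102.0, 99.0])
```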

          40 In the tables in this publication, only estimates (numbers, percentages, participation rates and means) with RSEs less than 25% are considered sufficiently reliable for most purposes. However, estimates with large RSEs (between 25% and 50%) have been included and are marked with a cell comment to indicate they have a relative standard error of 25% to 50% and should be used with caution. Estimates with RSEs of 50% or more are marked with a cell comment to indicate that they are subject to sampling variability too high for most practical purposes.
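These thresholds can be expressed as a simple flagging rule (an illustration, not ABS processing code):

```python
def reliability_flag(rse):
    """Classify an estimate by its RSE (in per cent), per the thresholds above."""
    if rse < 25:
        return "reliable"
    if rse < 50:
        return "use with caution"  # RSE of 25% to 50%
    return "too unreliable for most practical purposes"  # RSE of 50% or more
```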

          Standard errors of proportions and percentages

          41 Proportions and percentages formed from the ratio of two estimates are also subject to sampling error. The size of the error depends on the accuracy of both the numerator and the denominator. The RSE of a proportion or percentage can be approximated using the formula

          \(RSE\left(\frac{x}{y}\right)=\sqrt{[RSE(x)]^{2}-[RSE(y)]^{2}}\)

          42 This formula is only valid when \(x\) is a subset of \(y\).
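The approximation can be sketched directly (illustrative figures; because x is a subset of y, RSE(x) is at least as large as RSE(y)):

```python
import math

def rse_of_proportion(rse_x, rse_y):
    """Approximate RSE of x/y from the RSEs of x and y (both in per cent)."""
    return math.sqrt(rse_x ** 2 - rse_y ** 2)

# If RSE(x) = 10% and RSE(y) = 6%, the RSE of x/y is approximately 8%.
```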

          Standard errors of differences

          43 The difference between two survey estimates (of numbers or percentages) is itself an estimate and is therefore subject to sampling variability. The SE of the difference between two survey estimates depends on their SEs and the relationship (correlation) between them. An approximate SE of the difference between two estimates can be calculated using the formula

          \(SE(x-y)=\sqrt{[SE(x)]^{2}+[SE(y)]^{2}}\)

          44 While this formula will only be exact for differences between separate and uncorrelated (unrelated) characteristics or sub-populations, it is expected to provide a good approximation for all of the differences likely to be of interest in this publication.
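The formula can be sketched with illustrative figures:

```python
import math

def se_of_difference(se_x, se_y):
    """Approximate SE of (x - y), assuming x and y are uncorrelated."""
    return math.sqrt(se_x ** 2 + se_y ** 2)

# With SE(x) = 3 and SE(y) = 4, the SE of the difference is 5.
```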

          Significance testing

          45 For comparing estimates between surveys or between populations within a survey it is useful to determine whether apparent differences are 'real' differences between the corresponding population characteristics or simply the product of differences between the survey samples. One way to examine this is to determine whether the difference between the estimates is statistically significant. This is done by calculating the standard error of the difference between two estimates (x and y) and using that to calculate the test statistic using the formula

          \(\frac{|x-y|}{SE(x-y)}\)

          46 If the value of the test statistic is greater than 1.96, the difference between the two populations with respect to that characteristic is statistically significant at the 95% confidence level. Otherwise, it cannot be stated with confidence that there is a real difference between the populations.
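Putting the last two steps together, the test can be sketched as follows (illustrative figures; assumes uncorrelated estimates):

```python
import math

def significantly_different(x, se_x, y, se_y, critical=1.96):
    """True if |x - y| / SE(x - y) exceeds the 5%-level critical value."""
    se_diff = math.sqrt(se_x ** 2 + se_y ** 2)
    return abs(x - y) / se_diff > critical

# Estimates 100 (SE 3) and 110 (SE 4): statistic = 10 / 5 = 2.0 > 1.96.
```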

          Interpretation of results

          47 Information presented in this publication is essentially as reported by survey respondents. There may be some error as a consequence of survey respondents reporting information that is in error (whether accidentally or because they are unwilling to report full particulars in some circumstances).

          Related publications

          48 The following publications from the 2006 Time Use Survey are expected to be released in February 2008:

          49 Time Use Survey: User Guide, 2006 (Cat. no. 4150.0)

          50 Time Use Survey, Australia, Confidentialised Unit Record File, 2006 (Cat. no. 4152.0.55.001)

          51 Further information on the ABS and its products and services is available on the ABS website.

          Appendix - comparison of the nature, purpose and concordance classification


          Glossary


          Abbreviations

          The following symbols and abbreviations are used in this publication:

