National Survey of Mental Health and Wellbeing: Summary of Results methodology

Reference period
2007
Released
23/10/2008

Explanatory notes

Introduction

1 This publication presents a summary of results from the National Survey of Mental Health and Wellbeing (SMHWB), which was conducted throughout Australia from August to December 2007. This is the second mental health and wellbeing survey, with the previous survey conducted in 1997. Funding for this survey was provided by the Australian Government Department of Health and Ageing (DoHA).

2 The survey was based on a widely-used international survey instrument, developed by the World Health Organization (WHO) for use by participants in the World Mental Health Survey Initiative. The Initiative is a global study aimed at monitoring mental and addictive disorders. It aims to collect accurate information about the prevalence of mental, substance use and behavioural disorders. It measures the severity of these disorders and helps to determine the burden on families, carers and the community. It also assesses who is treated, who remains untreated and the barriers to treatment. The survey has been run in 32 countries, representing all regions of the world.

3 The survey used the World Mental Health Survey Initiative version of the World Health Organization's Composite International Diagnostic Interview, version 3.0 (WMH-CIDI 3.0). While most of the survey was based on the international survey modules, some modules, such as Health Service Utilisation, have been tailored to fit the Australian context. The adapted modules have been designed in consultation with subject matter experts from academic institutions and staff from the Mental Health Reform Branch of DoHA. Where possible, adapted modules used existing ABS questions. Extensive testing was conducted by the ABS to ensure that the survey would collect objective and high quality data.

4 Due to the high level of sensitivity of the survey's content, this survey was conducted on a voluntary basis.

5 The 2007 SMHWB collected information about:

  • lifetime and 12-month prevalence of selected mental disorders;
  • level of impairment for these disorders;
  • physical conditions;
  • health services used for mental health problems, such as consultations with health practitioners or visits to hospital;
  • social networks and caregiving; and
  • demographic and socio-economic characteristics.


6 A full list of the data items from the 2007 SMHWB will be available in the National Survey of Mental Health and Wellbeing: Users' Guide (cat. no. 4327.0), planned for release on the ABS website www.abs.gov.au in late 2008. The Users' Guide will also assist with evaluation and interpretation of the survey results.

Scope and coverage of the survey

7 The scope of the survey is people aged 16-85 years who were usual residents of private dwellings in Australia, excluding very remote areas. Private dwellings are houses, flats, home units and any other structures used as private places of residence at the time of the survey. People usually resident in non-private dwellings, such as hotels, motels, hostels, hospitals, nursing homes and short-stay caravan parks, were not in scope. Usual residents are those who usually live in a particular dwelling and regard it as their own or main home.

8 Scope inclusions:

  • Members of the Australian permanent defence forces; and
  • Overseas visitors who have been working or studying in Australia for 12 months or more prior to the survey interview, or who intended to do so.


9 Scope exclusions:

  • Non-Australian diplomats, non-Australian diplomatic staff and non-Australian members of their household;
  • Members of non-Australian defence forces stationed in Australia and their dependents; and
  • Overseas visitors (except for those mentioned in paragraph 8).


10 Proxy and foreign language interviews were not conducted. Therefore, people who were unable to answer for themselves were not included in the survey coverage but are represented in statistical outputs through inclusion in population benchmarks used for weighting.

11 The projected Australian adult resident population aged 16 years and over, as at 31 October 2007 (excluding people living in non-private dwellings and very remote areas of Australia), was 16,213,900, of which 16,015,300 were aged 16-85 years.

12 Population benchmarks are projections of the most recently released quarterly Estimated Resident Population (ERP) data, in this case, 30 June 2007. For information on the methodology used to produce the ERP see Australian Demographic Statistics Quarterly (cat. no. 3101.0). To create the population benchmarks for the 2007 SMHWB, the most recently released quarterly ERP estimates were projected forward two quarters past the period for which they were required. The projection was based on the historical pattern of each population component - births, deaths, interstate migration and overseas migration. By projecting two quarters past that needed for the current population benchmarks, demographic changes are smoothed in, thereby making them less noticeable in the population benchmarks.

Sample design

13 The 2007 SMHWB was designed to provide reliable estimates at the national level. The survey was not designed to provide state/territory level data; however, some data may be available (on request) for the states with larger populations, eg New South Wales. Users should exercise caution when using estimates at this level due to high sampling errors. Relative standard errors (RSEs) for all estimates in this publication are available free-of-charge on the ABS website www.abs.gov.au, released in spreadsheet format as an attachment. As a guide, the population and RSE estimates for Table 2 have also been included in the Technical Note.

14 Dwellings included in the survey in each state and territory were selected at random using a stratified, multistage area sample. This sample included only private dwellings from the geographic areas covered by the survey. The sample was allocated to states and territories roughly in proportion to their respective population sizes. The expected number of fully-responding households was 11,000.

15 To improve the reliability of estimates for younger (16-24 years) and older (65-85 years) persons, these age groups were given a higher chance of selection in the household person selection process. That is, a household member in the younger or older age group was more likely to be selected for interview than other household members.

16 There were 17,352 private dwellings initially selected for the survey. This sample was expected to deliver the desired number of fully-responding households, based on an expected response rate of 75% and an allowance for sample loss. The sample was reduced to 14,805 dwellings after excluding households with no residents in scope for the survey and dwellings that proved to be vacant, under construction or derelict.

17 Of the eligible dwellings selected, there were 8,841 fully-responding households, representing a 60% response rate at the national level. Interviews took, on average, around 90 minutes to complete.

18 Some survey respondents provided most of the required information, but were unable or unwilling to provide a response to certain data items. The records for these persons were retained in the sample and the missing values were recorded as 'don't know' or 'not stated'. No attempt was made to deduce or impute these missing values.

19 Due to the lower than expected response rate, the ABS undertook extensive non-response analyses as part of the validation and estimation process. A Non-Response Follow-Up Study (NRFUS) was conducted from January to February 2008. The aim of the NRFUS was to provide a qualitative assessment of the likelihood of non-response bias associated with the 2007 SMHWB estimates.

20 The Non-Response Follow-Up Study (NRFUS) consisted of a sample of non-respondents from the 2007 SMHWB in Sydney and Perth and was based on reduced survey content. It had a response rate of 39%, yielding information on 151 non-respondents. Further information on the non-response analyses is provided in paragraphs 63-72.

Data collection

21 A group of ABS officers were trained in the use of the Composite International Diagnostic Interview (CIDI) by staff from the CIDI Training and Reference Center, University of Michigan. These officers then provided training to experienced ABS interviewers, as part of a comprehensive four-day training program, which also included sensitivity training and field procedures.

22 Trained ABS interviewers conducted personal interviews at selected private dwellings from August to December 2007. Interviews were conducted using a Computer-Assisted Interviewing (CAI) questionnaire. CAI involves the use of a notebook computer to record, store, manipulate and transmit the data collected during interviews.

23 One person in the household, aged 18 years or over, was selected to provide basic information, such as age and sex, for all household members. This person, or an elected household spokesperson, also answered some financial and housing items, such as income and tenure, on behalf of other household members.

24 Once basic details had been recorded for all in-scope household members, one person aged 16-85 years was randomly selected to complete a personal interview. Younger and older persons were given a higher chance of selection. See paragraph 15 and paragraph 50 for more information.

Survey content

25 Broadly, the 2007 SMHWB collected information on: selected mental disorders; the use of health services and medication for mental health problems; physical conditions; disability; social networks and caregiving; and demographic and socio-economic characteristics.

26 A Survey Reference Group, comprising experts and key stakeholders in the field of mental health, provided the ABS with advice on the survey content, including the most appropriate topics for collection, and associated concepts and definitions. They also provided advice on issues that arose during field tests and the most suitable survey outputs. Group members included representatives from government departments, universities, health research organisations, carers organisations and consumer groups.

Selected mental disorders

27 The 2007 SMHWB collected information on selected mental disorders that were considered to have the highest prevalence in the population and that could be identified in an interviewer-based household survey. These mental disorders were:

  • Anxiety disorders
    • Panic Disorder
    • Agoraphobia
    • Social Phobia
    • Generalised Anxiety Disorder (GAD)
    • Obsessive-Compulsive Disorder (OCD)
    • Post-Traumatic Stress Disorder (PTSD)
       
  • Affective (mood) disorders
    • Depressive Episode
    • Dysthymia
    • Bipolar Affective Disorder
       
  • Substance Use disorders
    • Alcohol Harmful Use
    • Alcohol Dependence
    • Drug Use Disorders
       

Composite International Diagnostic Interview (CIDI)

28 Measuring mental health in the community through household surveys is complex, as mental disorders are usually determined through detailed clinical assessment. To estimate the prevalence of specific mental disorders, the 2007 National Survey of Mental Health and Wellbeing used the World Mental Health Survey Initiative version of the World Health Organization's Composite International Diagnostic Interview, version 3.0 (WMH-CIDI 3.0). The WMH-CIDI 3.0 was chosen because it:

  • provides a fully structured diagnostic interview;
  • can be administered by lay interviewers;
  • is widely used in epidemiological surveys;
  • is supported by the World Health Organization (WHO); and
  • provides comparability with similar surveys conducted worldwide.
     

29 The WMH-CIDI 3.0 provides an assessment of mental disorders based on the definitions and criteria of two classification systems: the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV); and the WHO International Classification of Diseases, Tenth Revision (ICD-10). Each classification system lists sets of criteria that are necessary for diagnosis. The criteria specify the nature and number of symptoms required; the level of distress or impairment required; and the exclusion of cases where symptoms can be directly attributed to general medical conditions, such as a physical injury, or to substances, such as alcohol.

30 The 2007 SMHWB was designed to provide lifetime prevalence estimates for mental disorders. Respondents were asked about experiences throughout their lifetime. In this survey, 12-month diagnoses were derived based on lifetime diagnosis and the presence of symptoms of that disorder in the 12 months prior to the survey interview. The full diagnostic criteria were not assessed within the 12 month time-frame. This differs from the 1997 survey where diagnostic criteria were assessed solely on respondents' experiences in the 12 months prior to the survey interview. More information on the comparison between the 1997 and 2007 surveys is provided in Appendix 2.
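This derivation can be expressed as a minimal sketch, assuming hypothetical flags for a lifetime diagnosis and for 12-month symptom presence (the operational WMH-CIDI 3.0 algorithms involve many more inputs):

```python
def has_12_month_disorder(meets_lifetime_criteria: bool,
                          had_symptoms_in_last_12_months: bool) -> bool:
    """Illustrative 2007 SMHWB rule: a 12-month diagnosis requires that the
    person met the full lifetime criteria for the disorder AND reported
    symptoms of that disorder in the 12 months prior to interview.
    The full diagnostic criteria are not re-assessed for the 12-month window."""
    return meets_lifetime_criteria and had_symptoms_in_last_12_months
```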

31 Diagnostic algorithms are specified in accordance with the DSM-IV and ICD-10 classification systems. As not all modules contained in the WMH-CIDI 3.0 were operationalised for the 2007 SMHWB, it was necessary to tailor the diagnostic algorithms to fit the Australian context. Data in this publication are presented using the ICD-10 classification system. Prevalence rates are presented with hierarchy rules applied, for more information see paragraphs 34-37. More information on the WMH-CIDI 3.0 diagnostic assessment criteria according to the ICD-10 is provided in Appendix 1.

32 A screener was introduced to the WMH-CIDI 3.0 to try to alleviate the effects of learned responses, such as providing a particular response to avoid further questions. The module included a series of introductory questions about the respondent's general health, followed by diagnostic screening questions for the primary disorders assessed in the survey, eg Depressive Episode. This screening method has been shown to increase the accuracy of diagnostic assessments, by reducing the effects of learned responses due to respondent fatigue. Other non-core disorders, such as Obsessive-Compulsive Disorder (OCD), were screened at the beginning of the individual module.

33 The WMH-CIDI 3.0 was also used to collect information on:

  • the onset of symptoms and mental disorders;
  • the courses of mental disorders, that is, the varying degrees to which the symptoms of mental disorders present themselves, including: episodic (eg depression), clusters of attacks (eg panic disorder), and fairly persistent dispositions (eg phobias);
  • the impact of mental disorders on home management, work life, relationships and social life; and
  • treatment seeking and access to helpful treatment.
     

Hierarchy rules

34 For some ICD-10 disorders, the classification system contains diagnostic exclusion rules, so that a person, despite having symptoms that meet diagnostic criteria, will not meet criteria for particular disorders because the symptoms are believed to be accounted for by the presence of another disorder. In these cases, one disorder takes precedence over another. These exclusion rules are built into the diagnostic algorithms.

35 The developers of WMH-CIDI 3.0 established two versions of the diagnoses in the algorithms for a number of the mental disorders: a 'with hierarchy' version and a 'without hierarchy' version. The 'with hierarchy' version specifies the full diagnostic criteria consistent with the ICD-10 classification system (ie the exclusion criteria are enforced). The 'without hierarchy' version applies all diagnostic criteria except the criteria specifying the hierarchical relationship with other disorders. More information on the WMH-CIDI 3.0 diagnostic assessment criteria according to the ICD-10 is provided in Appendix 1.

36 One example of a disorder specified with and without hierarchy is Alcohol Harmful Use. ICD-10 states that in order for diagnostic criteria for Harmful Use to be met, criteria cannot be met for Dependence on the same substance during the same time period. Therefore, the ‘with hierarchy’ version of Alcohol Harmful Use will exclude cases where Alcohol Dependence has been established for the same time period. The ‘without hierarchy’ version includes all cases of Alcohol Harmful Use regardless of coexisting Alcohol Dependence. Note that a person can meet criteria for Alcohol Dependence and the hierarchical version of Alcohol Harmful Use if there is no overlap in time between the two disorders.
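A minimal sketch of this distinction, using hypothetical boolean inputs (the operational diagnostic algorithms are considerably more detailed):

```python
def alcohol_harmful_use_with_hierarchy(meets_harmful_use_criteria: bool,
                                       meets_dependence_criteria: bool,
                                       same_time_period: bool) -> bool:
    """Illustrative ICD-10 hierarchy rule: Harmful Use is not diagnosed when
    Alcohol Dependence is established for the same time period."""
    if meets_dependence_criteria and same_time_period:
        return False  # Dependence takes precedence for the overlapping period
    return meets_harmful_use_criteria


def alcohol_harmful_use_without_hierarchy(meets_harmful_use_criteria: bool) -> bool:
    """The 'without hierarchy' version ignores any coexisting Dependence."""
    return meets_harmful_use_criteria
```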

37 Throughout this publication, the ICD-10 prevalence rates are presented with the hierarchy rules applied, except for the comorbidity data, which are presented without hierarchy. The ICD-10 disorders specified with and without hierarchy in this publication are: Generalised Anxiety Disorder; Hypomania; Mild, Moderate and Severe Depressive Episode; Dysthymia; and the Harmful Use of Alcohol, Cannabis, Sedatives, Stimulants and Opioids.

Comorbidity

38 Comorbidity is the co-occurrence of more than one disease and/or disorder in an individual. Mental disorders may co-occur for a variety of reasons, and Substance Use disorders frequently co-occur (CDHAC, 2001). A person with co-occurring diseases or disorders is likely to experience more severe and chronic medical, social and emotional problems than if they had a single disease or disorder. People with comorbid conditions are also more vulnerable to alcohol and drug relapses, and relapse of mental health problems. Higher numbers of disorders are also associated with greater impairment, higher risk of suicidal behaviour and greater use of health services.

39 In this publication, information is presented on both the comorbidity of mental disorder groups and physical conditions (Table 10), and the co-occurrence of more than one mental disorder with physical conditions (Table 11). As people with comorbid disorders generally require higher levels of support than people with only one disorder, Table 13 presents the number of 12-month mental disorders by services used for mental health problems.

40 All comorbidity tables in this publication are presented without the WMH-CIDI 3.0 hierarchy rules applied to provide a more complete picture of the combinations of symptoms and disorders experienced by individuals. For more information on hierarchy rules see paragraphs 34-37 and Appendix 1.

Chronic conditions

41 Questions regarding chronic conditions were adapted from a module in the WMH-CIDI 3.0, to enable some cross-country comparisons of physical conditions. The resulting module comprised a checklist of the National Health Priority Area physical conditions, such as asthma, heart condition and diabetes, together with a restricted set of other physical conditions recorded only if they had lasted for six months or more (for a complete list refer to Physical conditions in the Glossary). The module also included: questions on whether the conditions occurred in the 12 months prior to the survey interview and the age of onset of these conditions; a standard set of ABS questions on role impairment (ABS disability module); and questions to determine hypochondriasis/somatisation.

42 Respondents were asked a series of questions relating to health risk factors, specifically those related to lifestyle behaviours. The 2007 SMHWB collected information on smoking, level of exercise, and self-reported height and weight measurements to calculate a Body Mass Index (BMI). This was the first time that questions on physical activity and body mass were included in the SMHWB.
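As an illustration of the standard BMI calculation from self-reported height and weight (the function name and rounding below are illustrative and are not the survey's processing specification):

```python
def body_mass_index(weight_kg: float, height_cm: float) -> float:
    """BMI is weight in kilograms divided by the square of height in metres."""
    height_m = height_cm / 100
    return round(weight_kg / height_m ** 2, 1)


# Example: self-reported 80 kg and 175 cm give a BMI of about 26.1.
```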

The Kessler Psychological Distress Scale (K10)

43 The Kessler Psychological Distress Scale (K10) is a widely used screening instrument that gives a simple measure of psychological distress. It is not a diagnostic tool, but an indicator of psychological distress. The K10 is based on a person's emotional state during the 30 days prior to the survey interview. Respondents were asked a series of 10 questions and, for each item, answered on a five-level response scale based on the amount of time they reported experiencing the particular problem. The response scale of 1 to 5 corresponds to a range from 'none of the time' to 'all of the time'. Scores for the 10 questions are summed, giving a minimum possible score of 10 and a maximum possible score of 50. Low scores indicate low levels of psychological distress and high scores indicate high levels of psychological distress.
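The scoring just described can be sketched as follows (illustrative only; it assumes the ten item responses are already coded 1 to 5):

```python
def k10_score(responses: list[int]) -> int:
    """Sum the ten K10 items, each answered on a five-level scale
    (1 = 'none of the time' ... 5 = 'all of the time'), giving a total
    between 10 (lowest distress) and 50 (highest distress)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("K10 requires ten responses coded 1 to 5")
    return sum(responses)
```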

Functioning

44 A series of measures was used to determine the extent to which health problems affected the respondent's life and activities during the 30 days prior to the survey. This module included questions from the World Health Organization's Disability Assessment Schedule (WHODAS) and the (Australian) Assessment of Quality of Life (AQoL) instrument. Two questions on 30-day functioning ('Days out of role') from the 1997 SMHWB were also included in this module.

Health service utilisation

45 Respondents were asked about their health service utilisation for mental health problems and/or physical conditions. Health service utilisation covered admissions to hospital and consultations with a range of health professionals. Respondents were also asked about the number and length of admissions to hospital; the number of consultations with health professionals for mental health problems; and the method of payment for consultations.

46 Further information on the survey modules will be available in the National Survey of Mental Health and Wellbeing: Users' Guide (cat. no. 4327.0), planned for release on the ABS website www.abs.gov.au in late 2008.

Data processing

47 A combination of clerical and computer-based systems was used to process data from the 2007 SMHWB. The content of the data file was checked to identify unusual values which may have significantly altered estimates, and to identify illogical relationships not previously detected by edits. Where necessary, the ABS sought the advice of subject matter experts from academic institutions in order to determine the appropriate treatment.

48 The survey contained a number of open-ended questions, for which there were no predetermined responses. These responses were office coded. Some of the open-ended questions formed part of the assessment to determine whether a respondent met the criteria for diagnosis of a mental health disorder. These open-ended questions were designed to probe causes of a particular episode or symptom. Responses were then used to eliminate cases where there was a clear physical cause. As part of the processing procedures set out for the WMH-CIDI 3.0, responses provided to the open-ended questions are required to be interpreted by a suitably qualified person. The technical assistance for coding of the open-ended diagnostic-related questions for the 2007 SMHWB was provided by the University of New South Wales. Further information on data processing will be available in the National Survey of Mental Health and Wellbeing: Users' Guide (cat. no. 4327.0), planned for release on the ABS website www.abs.gov.au in late 2008.

Weighting, benchmarking and estimation

Weighting

49 Weighting is the process of adjusting results from a sample survey to infer results for the total in-scope population. To do this, a 'weight' is allocated to each sample unit corresponding to the level at which population statistics are produced, eg household and person level. The weight can be considered an indication of how many population units are represented by the sample unit. For the 2007 SMHWB, separate person and household weights were developed.

Selection weights

50 The first step in calculating weights for each person or household is to assign an initial weight, which is equal to the inverse of the probability of being selected in the survey. For the 2007 SMHWB, due to the length of the interview, only one in-scope person was selected per household. Thus the initial person weight was derived from the initial household weight according to the total number of in-scope persons in the household and the differential probability of selection by age used to obtain more younger (16-24 years) and older (65-85 years) persons in the sample.
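The idea can be sketched as follows, with hypothetical names and inputs (the actual SMHWB weight derivation involves additional detail not described here):

```python
def initial_household_weight(dwelling_selection_probability: float) -> float:
    """The initial household weight is the inverse of the probability that
    the dwelling was selected in the survey."""
    return 1.0 / dwelling_selection_probability


def initial_person_weight(household_weight: float,
                          person_selection_probability: float) -> float:
    """The initial person weight scales up the household weight by the inverse
    of the selected person's within-household selection probability, which
    reflects both the number of in-scope residents and the higher chance of
    selection given to 16-24 and 65-85 year olds."""
    return household_weight / person_selection_probability
```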

51 Apart from the 8,841 fully-responding households, basic information was obtained from the survey's household form for an additional 1,476 households and their occupants. This information was provided by a household member aged 18 years or over. In the case of these 1,476 households, the selected person did not complete the main questionnaire (eg they were unavailable or refused to participate). The information provided by these additional 1,476 households was analysed to determine if an adjustment to initial selection weights could be made as a means of correcting for non-response. However, no explicit adjustment was made to the weighting due to the negligible impact on survey estimates.

Benchmarking

52 The person and household weights were separately calibrated to independent estimates of the population of interest, referred to as 'benchmarks'. Weights calibrated against population benchmarks ensure that the survey estimates conform to the independently estimated distributions of the population rather than to the distribution within the sample itself. Calibration to population benchmarks helps to compensate for over- or under-enumeration of particular categories which may occur due to either the random nature of sampling or non-response. This process can reduce the sampling error of estimates and may reduce the level of non-response bias.
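As a simplified illustration of the calibration idea, the sketch below post-stratifies weights to a single benchmark variable; the survey itself calibrated weights simultaneously to several benchmarks (see paragraph 55), and all names and inputs here are hypothetical:

```python
def poststratify(weights: dict[str, float],
                 weighting_class: dict[str, str],
                 benchmarks: dict[str, float]) -> dict[str, float]:
    """Scale each respondent's weight so that the weighted total within each
    weighting class equals the independent population benchmark for that class."""
    weighted_totals: dict[str, float] = {c: 0.0 for c in benchmarks}
    for person, weight in weights.items():
        weighted_totals[weighting_class[person]] += weight
    return {
        person: weight * benchmarks[weighting_class[person]]
        / weighted_totals[weighting_class[person]]
        for person, weight in weights.items()
    }
```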

53 A standard approach in ABS household surveys is to calibrate to population benchmarks by state, part of state, age and sex. In terms of the effectiveness of 'correcting' for potential non-response bias, it is assumed that the characteristics being measured by the survey for the responding population are similar to the non-responding population within weighting classes, as determined by the benchmarking strategy. Where this assumption does not hold, biased estimates may result.

54 Given the relatively low response rate for the 2007 SMHWB, extensive analysis was done to ascertain whether further benchmark variables, in addition to geography, age, and sex, should be incorporated into the weighting strategy. Analysis showed that the standard weighting approach did not adequately compensate for differential undercoverage in the 2007 SMHWB sample for variables such as educational attainment, household composition, and labour force status, when compared to other ABS surveys and the 2006 Census of Population and Housing. As these variables were considered to have possible association with mental health characteristics, additional benchmarks were incorporated into the weighting strategy.

55 Initial person weights were simultaneously calibrated to the following population benchmarks:

  • state by part of state by age by sex; and
  • state by household composition; and
  • state by educational attainment; and
  • state by labour force status.


56 The state by part of state by age and sex benchmarks were obtained from demographic projections of the resident population, aged 16-85 years who were living in private dwellings, excluding very remote areas of Australia, at 31 October 2007. The projected resident population was based on the 2006 Census of Population and Housing using 30 June 2007 as the latest available Estimated Resident Population base. Therefore, the SMHWB estimates do not (and are not intended to) match estimates for the total Australian resident population (which include persons and households living in non-private dwellings, such as hotels and boarding houses, and in very remote parts of Australia) obtained from other sources.

57 The remaining benchmarks were obtained from other ABS survey data. These benchmarks are considered 'pseudo-benchmarks' as they are not demographic counts and they have a non-negligible level of sample error associated with them. The 2007 Survey of Education and Work (persons aged 16-64 years) was used to provide a pseudo-benchmark for educational attainment. The monthly Labour Force Survey (September to December 2007) provided the pseudo-benchmark for labour force status, as well as the resident population living in households by household composition. The pseudo-benchmarks were aligned to the projected resident population aged 16-85 years, who were living in private dwellings in each state and territory, excluding very remote areas of Australia, at 31 October 2007. The pseudo-benchmark of household composition was also aligned to the projected household composition population counts of households. The sample error associated with these pseudo-benchmarks was incorporated into the standard error estimation.

58 Household weights were derived by separately calibrating initial household selection weights to the projected household composition population counts of households containing persons aged 16-85 years, who were living in private dwellings in each state and territory, excluding very remote areas of Australia, at 31 October 2007.

Estimation

59 Estimates of counts of persons are obtained by summing person weights of persons with the characteristic of interest. Similarly, household estimates are produced using household level weights. The majority of estimates contained in this publication are based on benchmarked person weights.
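A minimal sketch of this estimation step, with hypothetical inputs:

```python
def estimate_persons(person_weights: list[float],
                     has_characteristic: list[bool]) -> float:
    """Estimated number of persons with a characteristic: the sum of the
    benchmarked person weights of respondents who report that characteristic."""
    return sum(w for w, flag in zip(person_weights, has_characteristic) if flag)
```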

60 Further information on weighting, benchmarking and estimation will be available in the National Survey of Mental Health and Wellbeing: Users' Guide (cat. no. 4327.0), planned for release on the ABS website www.abs.gov.au in late 2008.

Reliability of estimates

61 All sample surveys are subject to error which can be broadly categorised as either sampling error or non-sampling error. Sampling error occurs because only a small proportion of the total population is used to produce estimates that represent the whole population. Sampling error can be reliably measured as it is calculated based on the scientific methods used to design surveys. Non-sampling error may occur in any data collection, whether it is based on a sample or a full count (eg Census). Non-sampling error may occur at any stage throughout the survey process. For example, persons selected for the survey may not respond (non-response); survey questions may not be clearly understood by the respondent; responses may be incorrectly recorded by interviewers; or there may be errors in coding or processing survey data.

Sampling error

62 Sampling error is the expected random difference that could occur between the published estimates, derived from using a sample of persons, and the value that would have been produced if all persons in scope of the survey had been enumerated. A measure of the sampling error for a given sample estimate is provided by the standard error, which may be expressed as a percentage of the estimate (relative standard error). For more information refer to the Technical Note. In this publication estimates with relative standard errors (RSEs) of 25% to 50% are preceded by an asterisk (eg *3.4) to indicate that the estimate should be used with caution. Estimates with RSEs over 50% are indicated by a double asterisk (eg **0.6) and should be considered unreliable for most purposes.
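The flagging convention can be expressed as a small helper (illustrative only; the boundary treatment follows the Technical Note, which applies a single asterisk to RSEs of 25% to less than 50% and a double asterisk to RSEs of 50% or more):

```python
def flag_estimate(estimate: float, rse_percent: float) -> str:
    """Annotate a published estimate according to its relative standard error."""
    if rse_percent >= 50:
        return f"**{estimate}"   # considered unreliable for most purposes
    if rse_percent >= 25:
        return f"*{estimate}"    # subject to high sampling error; use with caution
    return f"{estimate}"
```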

Non-response and non-sampling error

63 Non-response may occur when people cannot or will not cooperate, or cannot be contacted. Unit and item non-response by persons/households selected in the survey can affect both sampling and non-sampling error. The loss of information on persons and/or households (unit non-response) and on particular questions (item non-response) reduces the effective sample and increases sampling error.

64 Non-response can also introduce non-sampling error by creating a biased sample. The magnitude of any non-response bias depends upon the level of non-response and the extent of the difference between the characteristics of those people who responded to the survey and those who did not within population subgroups as determined by the weighting strategy. See paragraphs 49-58.

65 To reduce the level and impact of non-response, the following methods were adopted in this survey:

  • face-to-face interviews with respondents;
  • follow-up of respondents if there was initially no response;
  • ensuring the weighted file is representative of the population by aligning the estimates with population benchmarks;
  • use of pseudo-benchmarks for educational attainment, labour force status and household composition; and
  • conducting a Non-Response Follow-Up Study (NRFUS) to gain a qualitative assessment of possible bias.


66 Every effort was made to minimise other non-sampling error by careful design and testing of questionnaires, intensive training of interviewers, and extensive editing and quality control procedures at all stages of data processing.

67 An advantage of the Computer-Assisted Interview (CAI) used for this survey is that it potentially reduces non-sampling errors by enabling edits to be applied as the data are being collected. These edits allow the interviewer to query respondents and resolve issues during the interview. Sequencing of questions is also automated so that respondents are asked only relevant questions and only in the appropriate sequence, eliminating interviewer sequencing errors.

68 Of the eligible dwellings selected in the 2007 SMHWB, 5,851 (40%) did not respond fully or adequately. Reflecting the sensitive nature of the survey topic, the average expected interview length of around 90 minutes, and the voluntary nature of the survey, almost two-thirds (61%) of these dwellings were full refusals. More than a quarter (27%) provided household details, but the selected person did not complete the main questionnaire. The remainder (12%) provided partial or incomplete information. As the level of non-response for this survey was significant, extensive non-response analyses were undertaken to assess the reliability of the survey estimates.

69 A purposive, small-sample, short-form Non-Response Follow-Up Study (NRFUS) was developed for use with non-respondents in Sydney and Perth. The NRFUS was conducted from January to February 2008 and achieved a response rate of 39%. It used a short-form questionnaire containing demographic questions and the Kessler Psychological Distress Scale (K10). The short-form approach precluded the use of the full diagnostic assessment modules; the K10 was therefore included as a limited proxy for the mental health questions, for qualitative assessment against the 2007 SMHWB. The aim of the NRFUS was to provide a qualitative assessment of the likelihood of non-response bias. Respondents to the NRFUS were compared with people who responded fully to the 2007 SMHWB on a number of demographic variables, such as age, sex and marital status. The analysis suggests that there may be differences in the direction and magnitude of potential non-response bias between various geographical, age and sex domains that the weighting strategy does not correct for, although the magnitude of potential non-response bias appears to be small at the aggregate level. The results of the study suggest possible underestimation of the prevalence of mental health conditions in Perth, for men, and for young persons. However, given the small size and purposive nature of the NRFUS sample, the results were not explicitly incorporated into the 2007 SMHWB weighting strategy.

70 Analysis was also undertaken to compare the characteristics of respondents to the 2007 SMHWB with a number of other ABS collections, including the 2006 Census of Population and Housing, the 2004-05 National Health Survey, the 2007 Survey of Education and Work and the monthly Labour Force Survey, to ascertain data consistency. This analysis showed that some of the demographic and socio-economic characteristics in the initially weighted data did not align with other ABS estimates. Additional (or 'pseudo') benchmarks were therefore used to adjust for differential undercoverage of educational attainment, labour force status and household composition. See paragraphs 52-58.

71 Categorisation of interviewer remarks from the NRFUS and the 2007 SMHWB indicated that the majority of persons who refused stated that they were 'too busy' or 'not interested' in participating in the survey.

72 Further details of the non-response analysis will be available in the National Survey of Mental Health and Wellbeing: Users' Guide (cat. no. 4327.0), planned for release on the ABS website www.abs.gov.au in late 2008.

Seasonal effects

73 The estimates in this publication are based on information collected from August to December 2007, and due to seasonal effects they may not be fully representative of other time periods in the year. Therefore, the results could have differed if the survey had been conducted over the whole year or in a different part of the year.

Interpretation of results

74 Care has been taken to ensure that the results of this survey are as accurate as possible. All interviews were conducted by trained ABS officers. Extensive reference material was developed for use and intensive training was provided to interviewers. There remain, however, other factors which may have affected the reliability of results, and for which no specific adjustments can be made. The following factors should be considered when interpreting these estimates:

  • Information recorded in this survey is 'as reported' by respondents, and therefore may differ from information available from other sources or collected using different methodologies. Responses may be affected by imperfect recall or individual interpretation of survey questions.
  • Some respondents may have provided responses that they felt were expected, rather than those that accurately reflected their own situation. Every effort has been made to minimise such bias through the development and use of culturally appropriate survey methodology.


75 For a number of survey data items, some respondents were unwilling or unable to provide the required information. Where responses for a particular data item were missing for a person or household they were recorded in a 'not known' or 'not stated' category for that data item. These 'not known' or 'not stated' categories are not explicitly shown in the publication tables, but have been included in the totals. Publication tables presenting proportions have included any 'not known' or 'not stated' categories in the calculation of these proportions.

76 The employment component of this survey is based on a reduced set of questions from the ABS monthly Labour Force Survey.

77 In terms of physical conditions, reported information was not medically verified, and was not necessarily based on diagnoses by a medical practitioner.

78 In terms of mental disorders, the WMH-CIDI 3.0 makes diagnoses against specific criteria. It has no facility for subjective interpretation. Therefore, it cannot always replicate diagnoses made by a health professional. Symptoms which have a considerable effect on people are likely to be better reported than those which have little effect.

79 The results of previous surveys on alcohol and illegal drug consumption suggest a tendency for respondents to under-report actual consumption levels.

80 The primary focus of the diagnostic modules is on the assessment of a lifetime mental disorder. This is based on the time when the respondent had the most symptoms or the worst period of this type. Where a number of symptoms have been endorsed across a lifetime, the respondent is asked about the presence of symptoms in the 12 months prior to the survey interview. To be included in the 12-month prevalence rates in the 2007 SMHWB, people must have met the criteria for lifetime diagnosis and had symptoms in the 12 months prior to interview. This differs from the 1997 SMHWB, where the diagnostic criteria were assessed solely on respondents' experiences in the 12 months prior to the survey interview.

81 The inclusion of lifetime diagnosis in the 2007 SMHWB may have led to higher prevalence of 12-month mental disorders compared to the 1997 survey. In the 2007 survey, people may have met the criteria for lifetime diagnosis and had symptoms in the 12 months prior to interview. However, they may not have met full diagnostic criteria within the 12-month time-frame, as was required in the 1997 survey. A number of other issues also need to be considered when comparing prevalence rates between the two surveys. For more information on comparability see paragraphs 85-95.

82 The exclusion of residents in special dwellings (eg hotels, boarding houses and institutions) and homeless people will have affected the results. It is therefore likely that the survey underestimates the prevalence of mental disorders in the Australian population.

83 Due to the higher than expected non-response rate, extensive analysis has been conducted to assess the reliability of the survey estimates. The Non-Response Follow-Up Study (NRFUS) provided some qualitative analysis of the possible differing characteristics of fully-responding and non-responding persons. As non-response bias can affect estimates of population characteristics and can differ across data items, users should exercise caution. More information on non-response is provided in paragraphs 63-72.

84 More information on interpreting the survey will be available in the National Survey of Mental Health and Wellbeing: Users' Guide (cat. no. 4327.0), planned for release on the ABS website www.abs.gov.au in late 2008.

Comparability with the 1997 survey

85 In 1997 the ABS conducted the National Survey of Mental Health and Wellbeing of Adults. The survey provided information on the prevalence of selected 12-month mental disorders, the level of disability associated with those disorders, health services used, and perceived need for help with a mental health problem for Australians aged 18 years and over. The survey was an initiative of, and was funded by, the then Commonwealth Department of Health and Family Services, as part of the National Mental Health Strategy. A key aim of the 1997 survey was to provide prevalence estimates for mental disorders in a 12 month time-frame. Therefore, diagnostic criteria were assessed solely on respondents' experiences in the 12 months prior to the survey interview.

86 The 2007 survey was designed to provide lifetime prevalence estimates for mental disorders. Respondents were asked about experiences throughout their lifetime. In the 2007 survey, 12-month diagnoses were derived based on lifetime diagnosis and the presence of symptoms of that disorder in the 12 months prior to the survey interview. The full diagnostic criteria were not assessed within the 12 month time-frame. Users should exercise caution when comparing data from the two surveys.

87 The diagnoses of mental disorders for the 2007 SMHWB are based on the WMH-CIDI 3.0, while the 1997 SMHWB diagnoses were based on an earlier version of the CIDI (version 2.1). Apart from the differences in time-frames, the WMH-CIDI 3.0 differs from earlier versions as it has a number of expanded modules, incorporates changes to diagnostic algorithms and sequencing, and utilises a diagnostic 'screener'. For example, the number of questions asked about scenarios which may have triggered a Post-Traumatic Stress Disorder (PTSD) has increased substantially, from 10 questions in 1997 to 28 questions in 2007. Additionally, in 1997 respondents were excluded if they said their extremely stressful or upsetting event was only related to bereavement, chronic illness, business loss, marital or family conflict, a book, movie or television show. A summary of the differences between the two surveys is provided in Appendix 2. The WMH-CIDI 3.0 diagnostic assessment criteria according to the ICD-10 for the 2007 SMHWB are provided in Appendix 1. For more information on the WMH-CIDI 3.0 visit the World Mental Health website http://www.hcp.med.harvard.edu/wmh/.

88 In this survey publication, the ICD-10 prevalence rates are presented with the hierarchy rules applied, except for the comorbidity data, which are presented without hierarchy. This differs from the 1997 survey publication, in which all data, including the comorbidity data, were presented with the hierarchy rules applied. For more information on hierarchy rules and comorbidity see paragraphs 34-40.

89 Both surveys collected information from persons in private dwellings throughout Australia. The 2007 SMHWB collected information from people aged 16-85 years, while the 1997 SMHWB collected information on people aged 18 years and over. For more information on scope and sample design refer to paragraphs 7-20.

90 The enumeration period of each survey differs, which may impact on data comparisons. The 2007 SMHWB was undertaken from August to December, while the 1997 survey was undertaken from May to August. See seasonal effects in paragraph 73.

91 The classifications of several demographic and socio-economic characteristics used in the 2007 SMHWB differ from those used in 1997, including: education, occupation, languages spoken and geography. Industry of employment was collected for the first time in 2007. See classifications in paragraphs 96-102.

92 Several of the scales and measures used to estimate disability and functioning in the 2007 SMHWB differ from those used in 1997. The 2007 survey includes a standard set of ABS questions on role impairment (ABS Short Disability Module), the World Health Organization Disability Assessment Schedule (WHODAS) and the Australian Assessment of Quality of Life (AQoL). In comparison, the 1997 survey collected information on disability and functioning using the Brief Disability Questionnaire, the Short-Form 12 and the General Health Questionnaire (GHQ-12). Both surveys contained questions on physical health, health related risk factors and 'days out of role'. However, the positioning of questions within each survey and the wording of questions varies. Information on physical activity and body mass was collected for the first time in 2007. The 2007 survey included a small number of questions on hypochondriasis and somatisation, whereas the 1997 survey assessed somatic disorder, neurasthenia, and the personality characteristic neuroticism (Eysenck Personality Questionnaire). Both surveys included the Kessler Psychological Distress Scale (K10).

93 As information on medications, social networks, caregiving, sexual orientation, homelessness and incarceration was collected for the first time in 2007 there are no data from the 1997 survey for comparison.

94 Standardisation is a technique used when comparing estimates for populations which have different structures. The 1997 SMHWB publication included data that had been age standardised. This technique was not applied in 2007, as age standardisation is no longer considered appropriate where there is a complex relationship between the variable of interest and age in the comparison populations.

95 A list of the differences between the data items collected in the two surveys is provided in Appendix 2. Further detailed information will be available in the National Survey of Mental Health and Wellbeing: Users' Guide (cat. no. 4327.0), planned for release on the ABS website www.abs.gov.au in late 2008.

Classifications

96 Country of birth data were classified according to the Standard Australian Classification of Countries (SACC), 1998 (cat. no. 1269.0).

97 Educational attainment data were classified according to the Australian Standard Classification of Education (ASCED), 2001 (cat. no. 1272.0).

98 Geography data were classified according to the Australian Standard Geographical Classification (ASGC), July 2007 (cat. no. 1216.0).

99 Languages spoken were coded utilising the Australian Standard Classification of Languages (ASCL), 2005-06 (cat. no. 1267.0).

100 Industry data were classified to the Australian and New Zealand Standard Industrial Classification (ANZSIC), 2006 (cat. no. 1292.0).

101 Occupation data were classified to the Australian and New Zealand Standard Classification of Occupations (ANZSCO), First Edition, 2006 (cat. no. 1220.0).

102 Pharmaceutical medications reported by respondents were classified by generic type. The classification used was developed by the ABS for the National Health Survey and is based on the World Health Organization's (WHO) Anatomical Therapeutic Chemical Classification and the framework underlying the listing of medications in the Australian Medicines Handbook.

Products and services

103 For users who wish to undertake more detailed analysis of the survey data, two confidentialised unit record files (CURFs) are expected to be available in early 2009. A Basic CURF will be available on CD-ROM, while an Expanded CURF (containing more detailed information than the Basic CURF) will be accessible through the ABS Remote Access Data Laboratory (RADL) system. Further information about these files, including how they can be obtained, and conditions of use, will be available on the ABS website www.abs.gov.au.

104 Summary of the products to be released:

  • National Survey of Mental Health and Wellbeing: Users’ Guide, 2007 (cat. no. 4327.0)
  • Microdata: National Survey of Mental Health and Wellbeing, Basic and Expanded Confidentialised Unit Record Files, 2007 (cat. no. 4326.0.30.001)
  • Technical Manual: National Survey of Mental Health and Wellbeing, Confidentialised Unit Record Files (cat. no. 4329.0)
     

105 Special tabulations are available on request. Subject to confidentiality and sampling variability constraints, tabulations can be produced from the survey to meet individual requirements. These can be provided in electronic or printed form. A list of data items from this survey will be available in the National Survey of Mental Health and Wellbeing: Users' Guide (cat. no. 4327.0) planned for release on the ABS website  www.abs.gov.au in late 2008. 

106 Further information about the survey and associated products can also be obtained through the National Information and Referral Service, whose contact details are listed at the end of this publication.

Acknowledgements

107 ABS publications draw extensively on information provided freely by individuals, businesses, governments and other organisations. Their continued cooperation is very much appreciated: without it, the wide range of statistics published by the ABS would not be available. Information received by the ABS is treated in strict confidence as required by the Census and Statistics Act 1905.

108 The ABS would like to acknowledge the extensive support provided by Dr Tim Slade and Ms Amy Johnston from the University of New South Wales, whose expertise in this subject matter area greatly assisted in the development and dissemination of this survey.

Related publications

109 Current publications and other products released by the ABS are available on the ABS website  www.abs.gov.au. ABS publications which may be of interest are:

Appendix 1 - ICD-10 diagnoses


Appendix 2 - comparison between 1997 and 2007


Technical note

Estimation procedures

1 Estimates from the survey were derived using a complex estimation procedure which ensures that survey estimates conform to independent population estimates by state, part of state, age and sex.

Reliability of the estimates

2 Two types of error are possible in an estimate based on a sample survey: sampling error and non-sampling error. The sampling error is a measure of the variability that occurs by chance because a sample, rather than the entire population, is surveyed. Since the estimates in this publication are based on information obtained from occupants of a sample of dwellings, they are subject to sampling variability; that is, they may differ from the figures that would have been produced if all dwellings had been included in the survey. One measure of the likely difference is given by the standard error (SE). There are about two chances in three that a sample estimate will differ by less than one SE from the figure that would have been obtained if all dwellings had been included, and about 19 chances in 20 that the difference will be less than two SEs.

3 Another measure of the likely difference is the relative standard error (RSE), which is obtained by expressing the SE as a percentage of the estimate. The RSE is a useful measure in that it provides an immediate indication of the percentage errors likely to have occurred due to sampling, and thus avoids the need to refer also to the size of the estimate.

\(\large{RSE\% = \left(\frac{SE}{\text{estimate}}\right) \times 100}\)
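As an illustrative example using figures from the table below, the estimate of 410,300 persons with Panic Disorder has an RSE of 9.3%, which corresponds to a standard error of roughly \(0.093 \times 410.3 \approx 38\) thousand persons.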

4 Space does not allow for the separate presentation of the SEs and/or RSEs of all the estimates in this publication. However, RSEs for all estimates are available free-of-charge on the ABS website www.abs.gov.au, released in spreadsheet format as an attachment to this publication, National Survey of Mental Health and Wellbeing: Summary of Results (cat. no. 4326.0). As a guide, the population and RSE estimates for Table 2 are presented below.

12-month mental disorders(a), relative standard error estimates

                                      Males               Females             Persons
                                      '000      RSE %     '000      RSE %     '000       RSE %
Any 12-month mental disorder
  Anxiety disorders
    Panic Disorder                    180.5     15.6      229.8     10.9      410.3      9.3
    Agoraphobia                       170.5     17.4      279.9     9.6       450.4      8.3
    Social Phobia                     298.9     13.4      461.0     7.0       759.9      6.2
    Generalised Anxiety Disorder      155.2     18.1      280.9     11.6      436.1      10.5
    Obsessive-Compulsive Disorder     130.6     17.6      175.0     11.0      305.6      10.3
    Post-Traumatic Stress Disorder    366.3     10.6      665.7     6.3       1,031.9    5.0
    Any Anxiety disorder(b)           860.7     6.7       1,442.3   3.8       2,303.0    3.3
  Affective disorders
    Depressive Episode(c)             245.0     13.7      407.4     8.2       652.4      7.2
    Dysthymia                         79.7      21.4      124.0     15.9      203.8      12.0
    Bipolar Affective Disorder        145.3     17.3      140.3     13.2      285.6      10.8
    Any Affective disorder(b)         420.1     9.8       575.8     7.1       995.9      5.5
  Substance Use disorders
    Alcohol Harmful Use               300.8     10.5      169.3     15.6      470.1      8.2
    Alcohol Dependence                174.9     15.7      55.3      18.5      230.2      12.2
    Drug Use disorders(d)             165.7     13.5      65.7      16.8      231.4      10.0
    Any Substance Use disorder(b)     556.4     8.8       263.5     10.7      819.8      6.5
Any 12-month mental disorder(a)(b)    1,400.1   5.5       1,797.7   2.9       3,197.8    2.7
No 12-month mental disorder(e)        6,549.7   1.2       6,267.8   0.8       12,817.5   0.7
Total persons aged 16-85 years        7,949.8   -         8,065.5   -         16,015.3   -

-      nil or rounded to zero (including null cells)

  (a) Persons who met criteria for diagnosis of a lifetime mental disorder (with hierarchy) and had symptoms in the 12 months prior to interview. See paragraphs 30-31 of Explanatory Notes.
  (b) A person may have had more than one 12-month mental disorder. The components when added may therefore not add to the total shown.
  (c) Includes Severe Depressive Episode, Moderate Depressive Episode, and Mild Depressive Episode.
  (d) Includes Harmful Use and Dependence.
  (e) Persons who did not meet criteria for diagnosis of a lifetime mental disorder and those who met criteria for diagnosis of a lifetime mental disorder (with hierarchy) but did not have symptoms in the 12 months prior to interview. See paragraphs 30-31 of Explanatory Notes.


5 The smaller the estimate, the higher the RSE. Very small estimates are subject to such high SEs (relative to the size of the estimate) as to detract seriously from their value for most reasonable uses. In the tables in this publication, only estimates with RSEs of less than 25% are considered sufficiently reliable for most purposes. However, estimates with larger RSEs, of 25% to less than 50%, have been included and are preceded by an asterisk (eg *3.4) to indicate they are subject to high SEs and should be used with caution. Estimates with RSEs of 50% or more are preceded by a double asterisk (eg **0.6). Such estimates are considered unreliable for most purposes.

6 The imprecision due to sampling variability, which is measured by the SE, should not be confused with inaccuracies that may occur because of imperfections in reporting by interviewers and respondents and errors made in coding and processing of data. Inaccuracies of this kind are referred to as the non-sampling error, and they may occur in any enumeration, whether it be in a full count or only a sample. In practice, the potential for non-sampling error adds to the uncertainty of the estimates caused by sampling variability. However, it is not possible to quantify the non-sampling error.

Standard errors of proportions and percentages

7 Proportions and percentages formed from the ratio of two estimates are also subject to sampling errors. The size of the error depends on the accuracy of both the numerator and the denominator. For proportions where the denominator is an estimate of the number of persons in a group and the numerator is the number of persons in a sub-group of the denominator group, the formula to approximate the RSE is given by:

\(\large{RSE\left(\frac{x}{y}\right) = \sqrt{[RSE(x)]^{2}-[RSE(y)]^{2}}}\)

8 From the above formula, the RSE of the estimated proportion or percentage will be lower than the RSE of the estimate of the numerator. Thus an approximation for SEs of proportions or percentages may be derived by neglecting the RSE of the denominator, ie by obtaining the RSE of the number of persons corresponding to the numerator of the proportion or percentage and then applying this figure to the estimated proportion or percentage.
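As an illustrative application of this approximation, using figures from the table above: persons with any 12-month Anxiety disorder (x = 2,303.0 thousand, RSE 3.3%) form a sub-group of persons with any 12-month mental disorder (y = 3,197.8 thousand, RSE 2.7%), so the estimated proportion of about 72% has \(RSE\left(\frac{x}{y}\right) \approx \sqrt{3.3^{2}-2.7^{2}} \approx 1.9\%\).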

Comparison of estimates

9 Published estimates may also be used to calculate the difference between two survey estimates. Such an estimate is subject to sampling error. The sampling error of the difference between two estimates depends on their SEs and the relationship (correlation) between them. An approximate SE of the difference between two estimates (x-y) may be calculated by the following formula:

\(\large{S E(x-y) = \sqrt{([S E(x)]^{2}+[SE(y)]^{2})}}\)

10 While the above formula will be exact only for differences between separate and uncorrelated (unrelated) characteristics of sub-populations, it is expected that it will provide a reasonable approximation for all differences likely to be of interest in this publication.

Significance testing

11 For comparing estimates between surveys or between populations within a survey it is useful to determine whether apparent differences are 'real' differences between the corresponding population characteristics or simply the product of differences between the survey samples. One way to examine this is to determine whether the difference between the estimates is statistically significant. This is done by calculating the standard error of the difference between two estimates (x and y) and using that to calculate the test statistic using the formula below:

\(\Large{\frac{|x-y|}{SE(x-y)}}\)

12 If the value of the statistic is greater than 1.96 then we may say there is good evidence of a statistically significant difference between the two populations with respect to that characteristic. Otherwise, it cannot be stated with confidence that there is a real difference between the populations.
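As an illustrative example using figures from the table above, compare the estimated numbers of females and males with any 12-month mental disorder: x = 1,797.7 thousand (RSE 2.9%, so SE ≈ 52 thousand) and y = 1,400.1 thousand (RSE 5.5%, so SE ≈ 77 thousand). Then \(SE(x-y) \approx \sqrt{52^{2}+77^{2}} \approx 93\) thousand, and the test statistic is \(\frac{|x-y|}{SE(x-y)} \approx \frac{397.6}{93} \approx 4.3\), which is greater than 1.96, so there is good evidence of a statistically significant difference between these two estimates.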

13 The imprecision due to sampling variability, which is measured by the SE, should not be confused with inaccuracies that may occur because of imperfections in reporting by respondents and recording by interviewers, and errors made in coding and processing data. Inaccuracies of this kind are referred to as non-sampling error, and they occur in any enumeration, whether it be a full count or sample. Every effort is made to reduce non-sampling error to a minimum by careful design of questionnaires, intensive training and supervision of interviewers, and efficient operating procedures.

Glossary


Quality declaration

Institutional environment

Relevance

Timeliness

Accuracy

Coherence

Interpretability

Accessibility

Bibliography


Abbreviations

