Definitions for design principles

Outcome: the direct, intended beneficial effect on the stakeholders or interests our work exists to serve. Outcomes may be hard or soft, but all require measurement using appropriate tools and processes.

Measure: the tool or means of tracking progress toward the outcome over time.

Watch a short introduction to design principles and the co-design process by Robert Penna (10 minutes):

View this video on StudioQ for a written transcript and related resources.

The following design principles were developed by the joint government and non-government sector co-design steering group and are considered important to the design and implementation of an effective approach to outcome measurement. 


Client focused

The content of outcome measures and the process of measurement need to be client focused and acknowledge the complexity of individual circumstances and needs.

Measurement and reporting on evidence of outcomes will not compromise the quality or timing of service provision, nor will they limit service flexibility or constrain innovation in service responses.

Measurement at the level of the individual, service and program funder

Outcomes will be measured at the level of the individual (as a key part of individual service delivery), the service (measured by aggregating individual outcomes achieved within a service against a specific contract), and the whole of program investment (measured through population data accessed by government).
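The service-level measurement described above is, in essence, an aggregation of individual-level results per contract. The sketch below illustrates that roll-up in Python; all record structures, field names and data are hypothetical and do not reflect any actual reporting system.

```python
# Illustrative sketch only: rolling individual-level outcomes up to the
# service level, per contract. All identifiers and data are hypothetical.
from collections import defaultdict

# Each record: (service_contract_id, client_id, outcome_achieved)
individual_outcomes = [
    ("contract-A", "client-1", True),
    ("contract-A", "client-2", False),
    ("contract-A", "client-3", True),
    ("contract-B", "client-4", True),
]

def aggregate_by_contract(records):
    """Aggregate individual outcome records to service level per contract."""
    totals = defaultdict(lambda: {"clients": 0, "achieved": 0})
    for contract_id, _client_id, achieved in records:
        totals[contract_id]["clients"] += 1
        totals[contract_id]["achieved"] += int(achieved)
    return dict(totals)

service_level = aggregate_by_contract(individual_outcomes)
print(service_level)
```

Program-level measurement would sit above this again, drawing on population data held by government rather than on provider-reported records.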

Evidence based

Determination of outcome measures will be informed by the best available evidence relevant to measuring the intended outcomes reflected in the program logic.


Attributable

It is acknowledged that outside influences can affect the achievement of outcomes. Outcome measures will be attributable to, and within the span of control of, the program, and immediately applicable to individual or family participation in program activities. Once clients have exited the service, outcomes fall outside the span of control of service providers.

De-identified narratives or stories of individual experiences, and qualitative evidence more generally, will be valued and incorporated into reporting processes. This will support understanding of the interconnected influences on client outcomes, and identification of influences beyond the program and of unmet and under-met need.

Consistent and transparent

Counting rules for all outcome measures at each level of data collection will be developed at the time of new investment. This will ensure consistent and comparable data collection, aggregation and reporting across contracted services, and at the program level, from the beginning of the program. These rules will be transparent to clients, service providers and government.


Efficient

Efficiency will guide the design and implementation of outcome measurement:

  • outcome measures will be used only where appropriate to the context or activity type; not all activities will require outcome measurement
  • data collected will complement, not duplicate, data already collected by government
  • where possible, existing systems will be used to collate and/or upload outcomes
  • the investment required to articulate, measure and report on outcomes will be proportionate to the investment in direct service provision.

Attentive to resource requirements

Implementing outcome measurement will take into account the resources and capability required to collect and report outcomes. The impact will differ depending on an organisation's size, number of service sites, geographic spread, range of program areas, and its current skills, approaches and systems for collating and reporting on outcomes.

Continuous improvement

A continuous improvement approach will be used to establish outcome measurement. Maturity in the extent and effectiveness of outcome measurement will build over time.

Last modified: Thursday, 17 December 2015, 9:27 AM