SECTION 8: Evaluating
It is only through evaluation that value exists.
– Friedrich Nietzsche
How do you know if your shared site is meeting the goals and objectives you identified in your Logic Model?
What impact is intergenerational programming having on participating children, older adults, and families?
Which activities or programs are most effective in promoting intergenerational relationships?
What are the greatest successes and challenges you have experienced?
An evaluation of a shared site involves the systematic gathering of data in order to determine if objectives are being met, and measure the impact of your programs and services on participants, families, and the community. Evaluation is critical to your short-term success and long-term sustainability. It can help you:
- Improve the functioning of your shared site by identifying challenges that need to be addressed and successes you want to repeat.
- Demonstrate the value of intergenerational interaction to administrators, partners, and the wider community.
- Acquire additional funding and meet the reporting requirements of funders.
- Market your programs and services.
- Enhance sustainability.
- Build capacity to respond to new opportunities and challenges.
- Plan for the future.
- Contribute to the intergenerational field.
Many shared sites, however, don’t engage in program assessment at all or conduct only very limited evaluations. This is often due to a lack of expertise, time, and/or financial resources. Depending on the size and scope of your evaluation, it may be helpful to hire an external evaluator to design and conduct the evaluation and/or analyze the data. Sometimes it is possible to find a student or faculty member at a local university who will work pro bono or for a reduced fee.
Note: Before determining the kind of evaluation you want to conduct, it is important to develop ways to monitor what is happening at your site. Monitoring is a project management tool that involves establishing procedures to gather and record information about day-to-day functioning (e.g., timing of programs, staff involvement, number of participants). It is an ongoing process.
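As an illustration, a monitoring log like the one described above can be kept as a simple spreadsheet or CSV file. The minimal Python sketch below appends one day's record to a running log; the file name, field names, activity, and staff name are all hypothetical examples, not prescribed by the toolkit:

```python
import csv
from datetime import date

# Illustrative monitoring-log fields: adapt to what your site tracks.
fields = ["date", "activity", "staff_lead", "num_children", "num_older_adults", "notes"]

# One day's hypothetical entry.
entry = {
    "date": date(2024, 3, 5).isoformat(),
    "activity": "Shared reading hour",
    "staff_lead": "J. Rivera",
    "num_children": 8,
    "num_older_adults": 6,
    "notes": "Started 10 minutes late; high engagement.",
}

# Append the entry to the log, writing the header only if the file is new.
with open("monitoring_log.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fields)
    if f.tell() == 0:
        writer.writeheader()
    writer.writerow(entry)
```

Because monitoring is ongoing, appending a row per session in a consistent format makes later process evaluation (attendance counts, timing patterns) much easier.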
8.1 Types of Evaluation
8.2 Evaluation Design
8.3 Data Collection and Analysis
8.4 Planning an Evaluation
8.5 Economic Evaluation of Shared Sites
8.6 Additional Resources
8.1 Types of Evaluation
What do you want to assess and when? Are you interested in examining the impact of your overall shared site on participants, the efficacy of specific program components, and/or the success of various activities or use of best practices?
There are various types of evaluation, all of which can help you plan, implement, and assess the effectiveness of your shared site.
Proactive Evaluation
This form of evaluation takes place before a program is designed. Findings can help planners decide what types of services and programs are needed. A community needs assessment is an example of a proactive evaluation.
Process Evaluation
Process evaluation is used to document the implementation of your activities/programs. It can aid in understanding the relationship between specific program elements and program outcomes. Once you have opened your shared site, you can start tracking participation and monitoring progress toward your goals. A process evaluation is done periodically to measure the success of your strategies in reaching your objectives. It requires the identification of specific process indicators that will help you determine whether your strategies are being implemented as planned.
You may want to explore:
- Level of participation and characteristics of attendees
- Level of satisfaction among staff, older adults, and children
- Extent to which best practices are being used
- Barriers to implementation
- Nature of the intergenerational interaction and relationships
- Relationship between specific program elements and program outcomes (e.g., children’s confidence reading out loud or older adults’ physical activity)
Here are some questions you might ask during a process evaluation:
- How many children and older adults were involved in your intergenerational site?
- On average, how many hours of contact did older adults and children have with each other per week?
- Which intergenerational activities/programs had the highest attendance of older adults and children?
- Did the number of children and older adults involved affect program success?
- Were activities/programs implemented as planned or were modifications needed?
- What strategies were used that supported program success (e.g., steps taken before or during the activity)?
- What barriers to implementation were encountered?
- What strategies were used to overcome these barriers?
- How satisfied were participants with the overall programming? What did they like and dislike?
Intergenerational Practice Evaluation Tool
One of the areas you may want to assess is the extent to which staff are using best practices in the implementation of intergenerational activities. The Intergenerational Practice Evaluation Tool was developed by Dr. Shannon Jarrott in 2019 for Generations United. There are two parts to this tool: one that measures the use of best practices in activity planning and implementation; and another that focuses on progress toward goals. Program leaders may find that parts are useful at different times. For example, a group new to program evaluation might prefer to start with Part 2, which allows for open-ended identification of goals and documentation of progress towards those goals. After some time, they might incorporate Part 1 to gather more specific practice and outcome details. Download a print-ready version of the Intergenerational Practice Evaluation Tool.
For a fuller description, go to Intergenerational Evaluation Toolkit pages 5-18.
Outcome Evaluation
An outcome evaluation measures the effects of a program, addressing crucial questions about its effectiveness by analyzing both immediate results and long-term impact. Typical outcome measures include:
- Increases in knowledge
- Changes in attitudes or values
- Modification of behaviors
- Improvement in conditions
- Increase in the number of supportive relationships participants have
To evaluate outcomes, you will need to measure the degree to which you achieved your desired results. Here are some examples:
For children:
- Improvements in cognitive functioning (e.g., expressing feelings, problem-solving, reflection)
- Improvements in socioemotional development (e.g., ability to cooperate, communicate, engage with others, and express empathy; level of confidence; feelings of security)
- Improvements in physical abilities (e.g., fine and gross motor skills, eye-hand coordination, sensory development)
“Our students’ understanding of respect takes on a whole new meaning when they interact with our grandmas and grandpas. They also learn tolerance and acceptance of physical differences when they get to know a resident who carries an oxygen tank or who has difficulty speaking.”
— Suzanne Lair, Jenks School District
For older adults:
- Improvements in cognitive functioning (e.g., attention to detail, decision-making, problem-solving)
- Improvements in socioemotional development (e.g., life satisfaction, mood, self-confidence, independence, loneliness)
- Improvements in physical abilities (e.g., range of motion, alertness, eye-hand coordination)
For families and caregivers:
- Reduced stress
- Increased confidence in the quality of services provided to children or older adults
- Increased awareness of the value of intergenerational relationships
For staff and administrators:
- Improvements in job performance (e.g., use of evidence-based practices)
- Improved job satisfaction (e.g., higher retention)
- Greater cost-efficiency (e.g., lower turnover, shared expenses across departments or organizations)
For the wider community:
- Increased public awareness of the benefits of connecting children and older adults under one roof
- Increased visibility of your intergenerational shared site in the community
The Intergenerational Evaluation Toolkit includes a chart (LINK TO PDF OF CHART) listing outcome measures for youth and older adults, along with instruments to measure those outcomes. These measures can be administered as a pre-test before you begin a longer-term series of activities and as a post-test after your program has concluded in order to assess its impact. You do not need to use every measure; rather, select the ones that best reflect your reasons for providing intergenerational programming. It is best not to measure impact after a single event, aside from conducting a satisfaction survey. The creators of these instruments typically recommend that a set period of time pass between measurements.
For more information on the specific instruments, go to Intergenerational Evaluation Toolkit pages 27-44.
8.2 Evaluation Design
A good evaluation design will address your measurement questions while taking into consideration the nature of your program, what program participants and staff are comfortable with, your time constraints, the audience for your evaluation, and the resources you have available for evaluation. Whether you use an external evaluator or conduct the evaluation internally will also greatly affect the size, scope, and design of your evaluation.
Although an experimental design using randomized control or comparison groups is the gold standard of scientific testing, it may be very difficult to use this approach in shared-site settings because you don’t want to deny services to any of your participants. The following are two commonly used designs that may be more realistic:
Pre-/Post-Test Comparison: Data are collected from participants before and after the program and then compared. This type of design assumes that a difference in the two observations will indicate whether there was a change over the period of time measured. It cannot tell you whether the changes would have occurred without the program. Pre-/post-test data can be either quantitative (numeric) or qualitative (narrative) and can assess changes in the group as a whole or changes in individuals.
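As a minimal sketch of the arithmetic behind a quantitative pre-/post-test comparison, the snippet below computes paired change scores. All participant IDs and scores are hypothetical (e.g., scores on a loneliness scale where lower is better):

```python
# Hypothetical pre- and post-program scores for four participants.
pre_scores = {"P01": 7, "P02": 5, "P03": 8, "P04": 6}
post_scores = {"P01": 5, "P02": 4, "P03": 6, "P04": 6}

# Change for each individual (negative = loneliness decreased).
changes = {pid: post_scores[pid] - pre_scores[pid] for pid in pre_scores}

# Average change across the group as a whole.
mean_change = sum(changes.values()) / len(changes)

print(f"Per-participant change: {changes}")
print(f"Mean change: {mean_change:.2f}")
```

Keeping both the individual changes and the group average reflects the design's two uses noted above: assessing change in individuals and in the group as a whole. Remember that, as the text cautions, a change score alone cannot tell you whether the change would have occurred without the program.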
Mid-term and End-of-Program Data Collection: Frequently, evaluation and data collection are postponed until the end of a specific program. In these cases, evaluators typically survey or talk to participants about their experiences after certain activities have been completed. This method gives participants time to reflect before sharing their opinions, but because no baseline data are gathered, it cannot show changes in behavior, knowledge, or attitudes. In a shared site with a range of ongoing intergenerational activities, this approach can be used at specific points in time rather than at the beginning and end of a program.
For more information on evaluation design, go to Community Toolbox.
Whatever design you select, it is important that it is aligned with the Logic Model you developed during the Planning Phase.
8.3 Data Collection and Analysis
Think about who should be involved in the evaluation—older adults? children? staff and teachers? caregivers? It is helpful to get the perspectives of all those affected by your program. Both process and outcome data can be collected using either quantitative or qualitative approaches.
Quantitative evaluation refers to something that can be measured numerically. Examples of quantitative data are the number of participants in a program, ratings given on participant satisfaction surveys, and scores on instruments that assess knowledge or specific variables, such as loneliness and self-esteem. If an evaluator helps to administer these assessments, it is important that they do it in the same way with each individual to reduce the chance of biased results.
Qualitative evaluation uses narrative data such as participants’ responses to open-ended surveys, comments collected during interviews or focus groups, field notes taken by an observer, journal entries, visual images, and notes based on video recordings of intergenerational programming. Potential questions to address include:
- Participants’ feelings about intergenerational programming and what it means to them
- Activities participants liked and disliked and why
- What people learned
- Ways programming might be run differently
- Activities participants might like to do in the future
Make sure to document particularly poignant quotes and/or stories from participants. These responses can add richness and depth to your evaluation and can also be used in your public relations efforts.
Combining quantitative and qualitative evaluation has proven to be a more powerful strategy than just carrying out one or the other. Use as many methods as you can to capture the changes that are occurring at your shared site.
When you analyze your outcome data, make sure to take into consideration the number of intergenerational activities in which individuals engage and the duration of that engagement as well as the characteristics of those participating (e.g., age, level of functioning). Evaluation findings should be shared with all key stakeholders and presented in a way that is easily understandable.
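To illustrate one way of taking engagement levels into account, the sketch below groups hypothetical outcome changes by attendance before averaging. The field names, the data, and the 8-session cutoff are all illustrative assumptions, not values from the toolkit:

```python
# Hypothetical outcome data: each record pairs a participant's attendance
# ("dosage") with their pre-to-post change score (negative = improvement).
participants = [
    {"id": "P01", "sessions_attended": 12, "score_change": -2},
    {"id": "P02", "sessions_attended": 3,  "score_change": 0},
    {"id": "P03", "sessions_attended": 10, "score_change": -3},
    {"id": "P04", "sessions_attended": 2,  "score_change": -1},
]

def mean(values):
    return sum(values) / len(values)

# Split by engagement level; 8+ sessions is an arbitrary illustrative cutoff.
high = [p["score_change"] for p in participants if p["sessions_attended"] >= 8]
low = [p["score_change"] for p in participants if p["sessions_attended"] < 8]

print(f"Mean change, high engagement: {mean(high):.2f}")
print(f"Mean change, low engagement: {mean(low):.2f}")
```

The same grouping idea extends to other participant characteristics mentioned above, such as age or level of functioning, so that differences between subgroups are not hidden inside a single overall average.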
Tips!
- You may need to make some accommodations based on the age and abilities of children and/or the functional ability of older adults. For example, a teacher may have to help a child fill out an evaluation form, even if it is designated as age-appropriate. Persons with cognitive challenges may need a quiet room with no distractions to answer questions about an activity in which they just engaged. When adults or children cannot provide answers to measurement questions, there are often observational scales or surveys of caregivers that can capture the child’s or older adult’s experiences.
- When you are designing your evaluation and/or analyzing your data, consider how factors such as race, religion, age, gender, sexual orientation, country of origin, and primary language can affect whether and how people respond to evaluation questions.
- Keep thorough attendance records and program logs. It is helpful to document observations of intergenerational activities so you can track what does and does not work.
- Involve a team of people with a variety of perspectives in your evaluation efforts to draw robust conclusions. Engaging them in developing and interpreting evaluation data adds depth to the knowledge gained and can strengthen support for intergenerational efforts.
8.4 Planning an Evaluation
The Intergenerational Evaluation Toolkit includes guidelines for planning and implementing an evaluation. Use the Intergenerational Program Evaluation Plan template to help you in this process.
For more information, go to Intergenerational Evaluation Toolkit pages 19-25.
8.5 Economic Evaluation of Shared Sites
While the psychological benefits of mixing generations are often reported by agencies, limited research has been conducted on the costs and benefits of an intergenerational shared site versus an age-segregated site. This kind of quantifiable information can help guide decisions about program choice and level of investment. A recent article by Vecchio et al. (2020) suggests that an economic evaluation of an intergenerational care program should use a quasi-experimental design involving the nonrandom allocation of participants into control and intervention groups. This helps determine whether changes result from the intergenerational activities or from natural changes that might occur among the different groups over time.
8.6 Additional Resources
- Connecting Generations in Senior Housing: A Program Implementation Toolkit (2019)
- Intergenerational Programmes Evaluation (2009)
- Universidad de Granada & Generations Working Together (2020). Monitoring and evaluating intergenerational programmes. Unpublished Module from online course International Diploma in Intergenerational Learning.