Is it not time companies also devised methods to monitor and evaluate their corporate community involvement programmes?
In the depths of recession, corporate community involvement programmes are rightly under examination. Do they represent value for money? Can the claimed “business benefits” really be demonstrated? Should not conventional management disciplines be applied here as elsewhere in the business? The answers should be YES, but the difficulty is how.
Simplicity
At its heart, the issue is straightforward, despite the often-used business-school jargon, which can induce a feeling of confusion and guilt in hard-pressed community affairs managers already struggling to implement a programme without adequate staffing. Monitoring is simply setting up a system to record and analyse information on a regular basis. Evaluation is taking that information and using it to assess whether the programme is achieving the objectives set at the outset. Of course, without clear objectives for community involvement, no company will be able to evaluate whether it is succeeding in meeting them.
Benefits from evaluation
Value for money is the key consideration. For a company with purely philanthropic motivations, this is just a question of how effectively its charitable donations secure community benefit. However, other companies see community involvement as a business activity and want their programme to meet business objectives wider than simple community benefit, including:
raising staff morale
recruiting and retaining staff in skill shortage areas
enhancing corporate reputation
securing local community 'licence to operate'
building customer loyalty
influencing investors and regulatory authorities
developing a reliable and cost-effective supplier base.
The challenge for companies is to develop methods to measure the effect of the community involvement programme in meeting these objectives, both community benefit and direct business benefit.
In addition to achieving and demonstrating value for money, monitoring and evaluation can secure better management of the company’s limited resources:
rigour in defining at the outset the objectives of the community affairs programme;
monitoring systems to record management information – how many appeals received, how many staff volunteer, etc;
data to fine-tune the programme and correct errors;
evidence to demonstrate results in terms of business benefit.
Faced with competing claims within the company and between elements of the programme, a system of monitoring and evaluation permits judgements to be made based on hard evidence.
More can be achieved from existing expenditure on research. Some companies already poll external and internal attitudes, whether the general public, opinion formers or staff. This can cost significant sums. At little extra cost, questions can be devised and added to yield considerable information about the impact of the community involvement programme.
Finally, monitoring and evaluation also helps the community partners in the process, enabling them to be clear about what is expected and allowing them to show results.
Measuring impact
Despite having said that, at heart, the issue is straightforward, difficulties do arise in finding simple ways to assess complex outcomes, particularly when the community programme is only one factor influencing (say) corporate reputation. There are three levels at which to monitor and evaluate:
input: what resources are put in – usually monetary; a good control measure and easy to compute;
output: what is produced for the given input – this starts to be a good assessment of the real worth of the programme and is not too difficult to measure;
impact: what is actually achieved – the most useful but the hardest to measure.
An example best illustrates the process.
Practical example
How would a system of evaluation work in practice? Suppose a company is located in an area of high unemployment, but has some difficulty recruiting and retaining young people of sufficient calibre. Several options exist, for example links with local schools or support for training schemes. Suppose the decision is to fund a voluntary-sector training scheme for young unemployed people. How can this be evaluated, and how can comparisons be made with alternatives?
First, what is the real community benefit? Appropriate input, output and impact measures would need to be agreed at the outset with the project organisers. For example:
input: how much money is spent per trainee in running a course?
output: how many people successfully completed the course?
impact: how many people secured jobs within three months?
Next, how has the donation helped in meeting business objectives? For example, support for the training project assists the company in securing its local 'licence to operate', since high unemployment is a major local concern. It improves wider corporate reputation. In employee terms, it also secures a better quality of recruit who will be retained longer.
Appropriate measures on reputation might include:
input: number of press releases to local paper; a reception/awards ceremony for civic dignitaries;
output: column inches of broadly favourable publicity; attendees at event;
impact: improved rating as a “good corporate citizen” over a twelve month period in a survey of local opinion formers.
Appropriate measures on staffing might include:
input: number of jobs filled directly through the training scheme;
output: fall in staff turnover rates as well-trained employees stay with a “good” employer;
impact: lower recruitment cost through fast track interviewing and lower turnover; better “good employer” rating in internal staff attitude surveys.
In the examples above, only quantitative measures are cited. In some circumstances, qualitative measures are also essential and are no less valid, provided the judgements are compared to initial expectations.
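The quantitative measures in the example above are simple ratios, and can be computed mechanically once the raw figures are recorded. A minimal sketch follows; all figures and field names are illustrative inventions, not drawn from any real programme:

```python
# Illustrative sketch: computing input/output/impact measures for a
# hypothetical voluntary-sector training scheme. All figures are invented.

def evaluate_scheme(grant, trainees, completions, placed_within_3_months):
    """Return the three levels of measure for one funded course."""
    return {
        # input: resources put in, expressed per trainee
        "cost_per_trainee": grant / trainees,
        # output: what is produced for the given input
        "completion_rate": completions / trainees,
        # impact: what is actually achieved
        "placement_rate": placed_within_3_months / completions,
    }

measures = evaluate_scheme(grant=24_000, trainees=30,
                           completions=24, placed_within_3_months=18)
print(measures)
```

The same figures, recorded year on year, also supply the "fine tuning" data mentioned earlier: a falling completion rate or placement rate flags where the programme needs correction.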
The BP experience
The above example only looked at one project. How could a whole programme be assessed, or individual projects compared? Professor Alan Smithers, during a year-long attachment to BP's Educational Relations team, helped to develop a value assessment process, applying a business-like methodology for rational decision-making. This was published in November 1992 under the title Assessing the Value1.
BP has an extensive programme of support for education which it regards not as a charitable activity but a business tool for enhancing its reputation and licence to operate, and for investing strategically in social infrastructure which has impact on BP’s business.
The approach adapts a tool often used in BP for decision-taking, plotting options on a graph. In this case, one axis shows the benefit to the company and the other, the benefit to education. Provided a meaningful assessment can be made so projects are accurately placed, a project in the top right-hand quadrant achieves high benefit both to company and education and perhaps warrants further investment. A bottom left project is a candidate for divestment. Projects elsewhere on the graph can be examined to see if a higher benefit can be obtained to move them into the all-important top right-hand quadrant.
In seeking to place projects on the graph, three questions are asked:
what are the company and its education partners seeking to achieve?
how well does a particular activity meet the objectives?
how much does it cost?
BP has defined ‘benefit’ in terms of precise objectives, such as, for the company, improving staff motivation and earning goodwill, and, for education, better career choices and access to resources. Adapting a technique from social psychology based on ranking the benefits in order of importance, a numerical score is obtained, so allowing plotting on the graph. It also allows “what-if” tests: if a project is adapted to yield (say) better credit for BP’s support, does the score of company benefit improve? Does it move across the graph?
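The published scoring system is not reproduced here, but the plotting idea can be sketched. In this illustrative approximation (the rank-order weights, ratings and quadrant threshold are invented for the sketch, not BP's actual method), each benefit is weighted by its rank order of importance to give a composite score on each axis:

```python
# Illustrative sketch of a BP-style benefit graph: score each project on
# two axes (benefit to company, benefit to education) and place it in a
# quadrant. All weights, ratings and thresholds are invented.

def axis_score(ratings, ranked_benefits):
    """Weight each benefit's 0-10 rating by its rank order of importance
    (most important benefit gets the largest weight); normalise to 0-10."""
    n = len(ranked_benefits)
    weights = {b: n - i for i, b in enumerate(ranked_benefits)}  # n, n-1, ..., 1
    total = sum(weights[b] * ratings[b] for b in ranked_benefits)
    return total / sum(weights.values())

def quadrant(company, education, threshold=5.0):
    if company >= threshold and education >= threshold:
        return "top right: candidate for further investment"
    if company < threshold and education < threshold:
        return "bottom left: candidate for divestment"
    return "mixed: examine for higher benefit"

company_benefits = ["staff motivation", "goodwill"]  # ranked by importance
education_benefits = ["better career choices", "access to resources"]

c = axis_score({"staff motivation": 7, "goodwill": 6}, company_benefits)
e = axis_score({"better career choices": 8, "access to resources": 5},
               education_benefits)
print(round(c, 2), round(e, 2), quadrant(c, e))
```

A "what-if" test is then just a rescoring: raise the rating for (say) credit given to the company's support, recompute the company-axis score, and see whether the project crosses into a different quadrant.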
A detailed explanation of the scoring system is outside the scope of this piece. In brief, it relies on qualitative judgements by individuals in the company and its education partners who are familiar with the projects, working together in day-long sessions. It is therefore a technique best used to evaluate larger programmes where trust has been built up between partners. The qualitative judgements can be supplemented by external tests, such as market research into corporate reputation.
The technique can be expanded in a number of directions. For example, where data is available, benchmarking allows one company to compare its activities against those of other companies. Furthermore, the different elements of a community involvement programme, such as arts, education and environment, can be compared.
The BP method is a very valuable contribution to this sometimes confusing issue. Even if the results have limitations, the process itself is highly beneficial to the company in thinking through what it is trying to achieve and how better to go about it. If corporate community involvement in this country is to grow, community affairs managers will need to find ways to demonstrate what their programmes are achieving. More pioneering work along the lines of BP's is needed before that can easily be accomplished.
1 Assessing the Value by Alan Smithers and Chris Marsden is available price £10 from BP Community and Educational Relations, BP House, Breakspear Way, Hemel Hempstead, Herts HP2 4UL. Phone 0442 232323.
Corporate Citizenship Briefing, issue no: 8 – February, 1993