10 things you should measure during your enterprise social network adoption

With so many companies exploring the idea of implementing a social network within their organisation to connect their employees better, it’s perhaps surprising to find that many of them fail to measure the success of the adoption scientifically. Instead, they rely on subjective measures like “is it working?” and “do we feel better connected?”. These emotional measures are useful and interesting, but they need to be backed up with hard numbers if you are to measure success accurately.

One of the main reasons people don’t measure adoption in this way is because they aren’t sure what the most meaningful measurements are, and what benchmarks they should be aspiring to reach. Here are 10 suggestions for the metrics you should care about. Some of these are available directly from Clearvale’s social analytics; others we typically help customers understand as part of our Social Enterprise Transformation program.

1. Percentage of active network members

Of all the people you invited to join your social network, how many are actually logging in?

Why do we care? Because it’s the most basic indication of whether your network is successful. If people aren’t logging in, none of the other metrics here matter.

What does “good” look like? Ideally, we want to reach 100%. How achievable that is depends on whether you invited just the people who really must participate, or a wider audience of people who you would hope will join in. What we want to see here is an upward trend, although don’t be discouraged by an initial peak (as people log in for the first time to see what it’s all about) followed by a decline – that’s normal. But look for the decline to be short-lived, and for participation levels to recover swiftly.
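If you want to sanity-check this figure yourself from a simple export of member records, the calculation is straightforward. The field names and the 30-day definition of “active” below are illustrative assumptions of mine – Clearvale’s analytics report this percentage for you.

```python
from datetime import datetime, timedelta

# Hypothetical export of invited members and their last login (None = never logged in).
members = [
    {"name": "alice", "last_login": datetime(2013, 5, 28)},
    {"name": "bob",   "last_login": None},
    {"name": "carol", "last_login": datetime(2013, 3, 2)},
]

# Treat anyone who has logged in within the last 30 days as "active".
cutoff = datetime(2013, 6, 1) - timedelta(days=30)
active = [m for m in members if m["last_login"] and m["last_login"] >= cutoff]

print("Active members: %.0f%%" % (100.0 * len(active) / len(members)))  # -> 33%
```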

2. Contribution per user

How many content items, comments and ratings has the average user contributed?

Why do we care? Because we don’t want the majority of members to be just “lurkers”; we want them to contribute. Clearvale tracks the level of contribution very visibly through activity points – the more points you have gained, the more you have contributed. But it is also useful to look at a more granular level and separate content creation, commenting and rating, as this says a lot about how the network is being used. For example, high creation levels with low commenting levels suggest members are perhaps still thinking in the old “broadcast”-style model of content management systems, rather than interacting with other members.

What does “good” look like? Again, it depends on the nature of your network, but we would want to see a steady increase before it levels off after full adoption. I typically recommend a target of 100 Clearvale activity points per user per month as realistic for initial adoption. This is equivalent to each user creating one content item, posting one comment and rating one content item per working day – something that ought to be easily achievable in any serious adoption project.
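To make the arithmetic explicit: with roughly 20 working days in a month, the target works out at about five points per day. The point weights in the sketch below are purely illustrative assumptions – check your own network’s scoring scheme rather than treating these values as Clearvale’s.

```python
# Illustrative point weights only -- your network's actual scoring may differ.
POINTS = {"content_item": 3, "comment": 1, "rating": 1}
WORKING_DAYS_PER_MONTH = 20

# One content item, one comment and one rating per working day...
points_per_day = sum(POINTS.values())
points_per_month = points_per_day * WORKING_DAYS_PER_MONTH

print(points_per_month)  # 100 -- the suggested initial adoption target
```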

3. Most active users

Who is contributing the most?

Why do we care? Of course, some users will contribute a lot more than others; these people are critical to the success of the adoption and can quickly become “community champions”. So it’s useful to recognise them – both to thank them for their contribution and to illustrate to other members how they are using the network. That said, some of the most active users may not be good examples – that in itself is useful to know, although obviously a little more delicate to handle.

What does “good” look like? Ideally, you would see active users from a range of departments/groups, rather than everyone being from the same team. But even if one team is far more active than the others, that is useful to highlight.

4. Participation inequality

Experience shows that considering the “average user” can be dangerous as it fails to account for extremes of behaviour. For example, a few very active users can compensate for a large number of inactive users. This is known as “participation inequality”, often referred to as the 1% rule. There are many different ways of measuring this – typically I measure the percentage of users who contribute 90% of the total network activity, but similar measures at 50%, 75%, etc. can also be instructive.

Why do we care? Because a successful social network needs to encourage participation from as many people as possible.

What does “good” look like? In a truly equal network, 90% of the activity would come from 90% of the participants; you are unlikely to achieve that. For internal social networks, I would consider figures under 10% to be poor, over 30% to be good, with most networks falling somewhere in the middle. For external, customer-facing networks, the figures are typically much lower.
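As a rough sketch of how this could be computed from per-user activity counts (the numbers below are invented for illustration):

```python
def participation_inequality(activity_counts, share=0.9):
    """Percentage of users who, between them, account for `share`
    of all network activity."""
    counts = sorted(activity_counts, reverse=True)
    target = share * sum(counts)
    running, contributors = 0, 0
    for c in counts:
        if running >= target:
            break
        running += c
        contributors += 1
    return 100.0 * contributors / len(counts)

# Hypothetical monthly activity points for 10 members.
activity = [420, 310, 150, 60, 30, 15, 10, 5, 0, 0]
print("%.0f%% of users generate 90%% of the activity"
      % participation_inequality(activity))   # -> 40%
```

Running the same list with share set to 0.5 or 0.75 gives the other thresholds mentioned above.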

5. Non-contribution

How many users are contributing nothing at all?

Why do we care? Because we want to look beyond the distortion caused by average users, and identify those who are not contributing.

What does “good” look like? Zero. And this ought to be entirely achievable.

6. Average social reach

How many other members does the average member interact with? There are several different ways of measuring this, e.g. number of people whose content the member has commented on or rated, but what is important is to measure observed behaviour (what they actually did) rather than stated behaviour (e.g. how many people they chose to follow).

Why do we care? Because social networks are all about connecting people. If members are not commenting on and rating each other’s content, they are really not being “social”. Or perhaps they are communicating, but still doing it outside the social network.

What does “good” look like? This depends on the nature of the organisation, and how many other people a member would typically need to communicate with in doing their job. So it will vary greatly from role to role – for example, I typically interact with about 40 other people each month on BroadVision’s internal implementation of Clearvale, but our average reach is around one fifth of that. It is perhaps easier to describe what “bad” looks like in this case – bad is zero, or indeed any figure below 2 or 3.
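Here is a minimal sketch of how reach could be derived from an interaction log of who commented on or rated whose content (the names and log format are invented). It also counts members with zero reach, which is exactly the “non-engagement” figure covered by the next metric.

```python
from collections import defaultdict

# Hypothetical interaction log: (actor, author whose content they commented on or rated).
interactions = [
    ("alice", "bob"), ("alice", "carol"), ("alice", "bob"),
    ("bob", "alice"),
    ("carol", "alice"), ("carol", "dave"),
]
members = ["alice", "bob", "carol", "dave"]   # "dave" interacts with nobody

reach = defaultdict(set)
for actor, author in interactions:
    if actor != author:
        reach[actor].add(author)

average_reach = sum(len(reach[m]) for m in members) / float(len(members))
zero_reach = [m for m in members if not reach[m]]

print("Average social reach: %.2f" % average_reach)           # (2 + 1 + 2 + 0) / 4 = 1.25
print("Members with zero reach: %s" % ", ".join(zero_reach))  # dave
```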

7. Non-engagement

The number of members with zero social reach.

Why do we care? Because, again, considering the average user can hide a lot of very low scores. If someone is using the social network but not interacting with other members, they are clearly not using it correctly. So it’s important to know how many people are doing this.

What does “good” look like? Zero active members with zero social reach. This ought to be achievable.

8. Most valued users

Whose content is most valued by other members of the network? In Clearvale, this is measured through content ratings, answering the “was this helpful?” question.

Why do we care? Because simply measuring the volume of contribution is not that meaningful once the network has become established. In the beginning, yes, what we care about most is that people are using the network, but as it becomes better established we want to make sure they are using it correctly, and therefore quality becomes more important than quantity.

What does “good” look like? We would want the list of members with the best content ratings to intuitively match the people who we (subjectively) consider to be the most valued members of the network. If the lists are wildly different, it suggests that the content rating mechanisms are not being used appropriately. Low volumes of content ratings are often a cause of this – it’s important to get into the habit of rating content to provide feedback to the author.
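Here is a sketch of how such a ranking might be produced from the raw ratings, assuming each rating records the content author and a score (again, invented data). Ignoring authors with very few ratings is one simple way to soften the low-volume distortion mentioned above.

```python
from collections import defaultdict

# Hypothetical "was this helpful?" ratings: (content author, score out of 5).
ratings = [
    ("alice", 5), ("alice", 4), ("alice", 5),
    ("bob", 3),
    ("carol", 5),   # a single rating -- too little evidence on its own
]

totals = defaultdict(lambda: [0, 0])   # author -> [sum of scores, number of ratings]
for author, score in ratings:
    totals[author][0] += score
    totals[author][1] += 1

MIN_RATINGS = 2   # ignore authors rated too rarely to be meaningful
ranked = sorted(
    ((total / float(count), count, author)
     for author, (total, count) in totals.items() if count >= MIN_RATINGS),
    reverse=True)

for average, count, author in ranked:
    print("%s: %.1f average from %d ratings" % (author, average, count))  # alice: 4.7 ...
```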

9. Most active communities/groups

Which groups of people are most active?

Why do we care? Because these can be used as showcases of how other groups should be using the network.

What does “good” look like? These most-active communities or groups would be focused on real business topics. It would not be so good if your most active community was, for example, one for discussing favourite movies, sports teams and so on.

10. Inter-group connectivity

A slightly more complicated one to finish with – the volume of communication between different groups/departments of users in the network.

Why do we care? One of the objectives of most social business projects is to encourage better communication between teams. So while good connectivity inside a team is a worthwhile aim, we also want to ensure that high activity levels are not merely perpetuating departmental silos. Therefore it is useful to measure the number and intensity of connections each member has with members outside their own department/team.

What does “good” look like? Strong connections between groups/departments, rather than isolated hubs of internal-facing activity.
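As a closing sketch, here is one simple way to tally links within and between departments from an interaction log (the department mapping and the log itself are invented for illustration):

```python
from collections import Counter

# Hypothetical data: each member's department, plus pairs of members who
# commented on or rated each other's content.
department = {"alice": "Sales", "bob": "Sales",
              "carol": "Engineering", "dave": "Support"}
interactions = [
    ("alice", "bob"), ("alice", "carol"), ("alice", "bob"),
    ("bob", "carol"), ("carol", "dave"),
]

links = Counter()
for a, b in interactions:
    links[tuple(sorted((department[a], department[b])))] += 1

for (dept_a, dept_b), count in links.most_common():
    if dept_a == dept_b:
        print("%d interactions within %s" % (count, dept_a))
    else:
        print("%d interactions between %s and %s" % (count, dept_a, dept_b))
```

A healthy network shows a good share of “between” links; a network where almost all activity stays “within” a single department is exactly the silo pattern we are trying to avoid.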