How is the Workspace score calculated?

Written by Kate Caldecott
Updated over a month ago

Summary

Breaks down how the Accoil Analytics score is calculated from event occurrences and event weights, to help you understand engagement scoring.

How this helps

Provides insights into the engagement scoring process, allowing for refined event weighting and more accurate scoring.

What goes into the Score

Your Accoil Analytics score is based on two things:

  1. Events - the actions users take in your product.

  2. Event weights - the importance assigned to each event.

That’s the foundation. Everything else builds from here.

How the Score is Calculated

Imagine you're tracking engagement in a CRM app. You might define your event weights like this:

Event                  Weight
-------------------    ------
Create New Lead        9
Schedule Meeting       7
Log Call               5
Send Email             3
Update Contact Info    1

Now let's say a user performed these events over a specified period:

Event                  Count   Weight   Score (Count × Weight)
-------------------    -----   ------   -----------------------
Create New Lead        3       9        27
Schedule Meeting       5       7        35
Log Call               10      5        50
Send Email             20      3        60
Update Contact Info    15      1        15
Total Raw Score                         187

This gives us a Raw Score of 187. But raw scores alone don’t tell the full story — they need to be scaled to mean something across the board.
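
As a quick sanity check, here's that arithmetic as a minimal Python sketch. The weights and counts come from the tables above; the code is purely illustrative, not Accoil's actual implementation.

```python
# Event weights and per-user event counts from the CRM example above
weights = {
    "Create New Lead": 9,
    "Schedule Meeting": 7,
    "Log Call": 5,
    "Send Email": 3,
    "Update Contact Info": 1,
}
counts = {
    "Create New Lead": 3,
    "Schedule Meeting": 5,
    "Log Call": 10,
    "Send Email": 20,
    "Update Contact Info": 15,
}

# Raw score = sum of (count x weight) across all tracked events
raw_score = sum(counts[event] * weights[event] for event in counts)
print(raw_score)  # 187
```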

To give you a more usable, easily digested number, we normalize everyone's scores to a value between 0 and 100.

Normalization of Scores

To make engagement scores more meaningful, we scale them to a range of 0 to 100 using an exponential formula. This takes the full range of activity into account — especially at the higher end.

Here's how it works:

  1. Calculate all raw scores based on the score configuration

  2. Find the 90th percentile (this becomes the benchmark)

  3. Apply an exponential transformation that normalizes scores relative to that point

This ensures:

  • The highest engagement scores represent true power users.

  • Scores remain dynamic as user activity trends shift.

  • Scores provide a fair benchmark for comparing engagement across different accounts.
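
The exact curve isn't spelled out in this article, but every worked number below (including the Day 1 / Day 30 comparison at the end) is consistent with a transformation of the form

$$\text{normalized} = \operatorname{round}\!\left(100 \times \left(1 - e^{-\text{raw}/p_{90}}\right)\right)$$

where $p_{90}$ is the 90th-percentile raw score. Treat this as an inferred formula rather than official documentation.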

Example: Normalization in action

Let’s say these are raw scores across a group of users:

[475, 89, 101, 7, 3, 21, 2, 149, 223, 1, 13, 9, 37]

The 90th percentile here is 208. Based on that, here’s what the normalized scores look like:

Raw Score   Normalized Score
---------   ----------------
475         90
223         66
149         51
101         38
89          35
37          16
21          10
13          6
9           4
7           3
3           1
2           1
1           0
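
Here's a short sketch that reproduces the table above under that inferred formula. The `percentile` helper mimics linear-interpolation percentiles (the numpy default); none of these function names come from Accoil.

```python
import math

raw_scores = [475, 89, 101, 7, 3, 21, 2, 149, 223, 1, 13, 9, 37]

def percentile(values, pct):
    # Linear-interpolation percentile (numpy-style)
    ordered = sorted(values)
    rank = (pct / 100) * (len(ordered) - 1)
    lower = math.floor(rank)
    upper = min(lower + 1, len(ordered) - 1)
    frac = rank - lower
    return ordered[lower] + frac * (ordered[upper] - ordered[lower])

def normalize(raw, benchmark):
    # Inferred exponential transformation: climbs steeply at first,
    # reaches ~63 at the benchmark itself, and approaches 100
    # asymptotically, so no hard cap is needed for outliers
    return round(100 * (1 - math.exp(-raw / benchmark)))

p90 = percentile(raw_scores, 90)  # ~208.2, matching the "208" above

for raw in sorted(raw_scores, reverse=True):
    print(raw, normalize(raw, p90))
# 475 90, 223 66, 149 51, 101 38, 89 35, 37 16, 21 10,
# 13 6, 9 4, 7 3, 3 1, 2 1, 1 0 (matching the table)
```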

Key features of this normalization:

  • Unlike linear scaling, it provides better differentiation between lower scores

  • Higher raw scores show continued improvement but with diminishing returns

  • The transformation naturally handles outliers without artificial caps

  • Scores remain proportional to actual engagement levels

Account Scoring

We use the same process to score accounts — just at a broader scale.

  1. Add up activity across all users in an account

  2. Normalize that score using the same 90th percentile method

The outcome?

  • Accounts with more engaged users will generally have higher scores.

  • Accounts with fewer active users will score lower.

  • Scores evolve as activity levels shift over time.

This approach ensures that Accoil Analytics provides a comprehensive and fair assessment of user and account engagement, enabling you to make informed decisions based on accurate data.
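
Here's the same idea at the account level as a minimal sketch. The account names and per-user raw scores are made up for illustration, and a fixed benchmark stands in for the real cross-account 90th percentile.

```python
import math

# Hypothetical per-user raw scores, grouped by account
accounts = {
    "acme":   [187, 94, 12],  # three active users
    "globex": [45, 8],        # two active users
}

# Step 1: add up activity across all users in each account
account_raw = {name: sum(scores) for name, scores in accounts.items()}
# {'acme': 293, 'globex': 53}

# Step 2: normalize against the 90th percentile of all account raw
# scores; a stand-in value is used here for brevity
benchmark = 250

def normalize(raw, p90):
    # Same inferred exponential transformation as the user-level example
    return round(100 * (1 - math.exp(-raw / p90)))

for name, raw in account_raw.items():
    print(name, normalize(raw, benchmark))
# acme 69, globex 19: more engaged users generally means a higher score
```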


Understanding Relative Scores

It's important to note that scores are relative to the overall engagement across all accounts. This means that maintaining the same level of raw activity doesn't guarantee the same score over time. Here's an example:

Day 1:

  • Account A Raw Score: 100

  • 90th percentile threshold across all accounts: 200

  • Account A Normalized Score: 39.3

Day 30:

  • Account A Raw Score: 100 (unchanged)

  • 90th percentile threshold across all accounts: 400 (increased due to higher overall engagement)

  • Account A Normalized Score: 22.1

This decrease in score doesn't mean Account A is doing worse – they're maintaining the same level of activity. Instead, it indicates that other accounts have increased their engagement levels, raising the overall benchmark.
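
Plugging the Day 1 and Day 30 numbers into the inferred formula reproduces both scores:

```python
import math

def normalize(raw, benchmark):
    return 100 * (1 - math.exp(-raw / benchmark))

print(round(normalize(100, 200), 1))  # 39.3 (Day 1)
print(round(normalize(100, 400), 1))  # 22.1 (Day 30)
```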

This relative scoring approach:

  • Reflects real-world engagement patterns where "good" engagement levels evolve over time

  • Encourages continuous improvement rather than maintaining static activity levels

  • Provides context for how an account's engagement compares to the current user base

  • Helps identify accounts that may need attention even if their raw activity hasn't decreased

When you combine raw activity with score movement over time, you get a much clearer picture of how your users or accounts are really doing.
