
Micropersonalization
The best 'next best action'

Bruce Ho
bruce.ho@orangepeople.com
Bruce is chief data scientist at OrangePeople. In addition to helping clients harness the power of data, he is an avid cloud and IoT enthusiast.
Optimizing content for an individual viewer is personalization. Engaging each customer with the right incentive, at the right time, is micropersonalization. The industry term for this capability is next best action (NBA). The first step toward achieving it is building a 360-degree view of the customer. This entails combining every information source on the person, from the internal CRM to social media, the enterprise data warehouse (EDW), call center records, purchase history, click trails, and user reviews. That information must be de-siloed into one data lake and equipped with streaming capabilities.
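As a rough illustration of that consolidation step, the sketch below joins three hypothetical extracts (CRM, purchase history, click trail) into a single profile table with pandas. The file names, column names, and local CSV reads are assumptions for the example; a production system would read from the data lake or a streaming layer instead.

```python
import pandas as pd

# Hypothetical source extracts; column names are illustrative assumptions.
crm = pd.read_csv("crm_customers.csv")          # customer_id, age, income, segment
purchases = pd.read_csv("purchase_history.csv") # customer_id, order_value, order_date
clicks = pd.read_csv("click_trail.csv")         # customer_id, page, timestamp

# Aggregate behavioral sources to one row per customer.
purchase_summary = (purchases.groupby("customer_id")
                    .agg(total_spend=("order_value", "sum"),
                         last_order=("order_date", "max")))
click_summary = (clicks.groupby("customer_id")
                 .agg(page_views=("page", "count"),
                      last_seen=("timestamp", "max")))

# Join everything into a single 360-degree profile table.
profile_360 = (crm.set_index("customer_id")
               .join(purchase_summary, how="left")
               .join(click_summary, how="left"))
print(profile_360.head())
```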
There are two basic types of customer profile data: implicit attributes (age, income, geolocation, profession, family size, etc.) and explicit review information, where a customer actually tells you how much she likes or hates a particular product. The well-known collaborative filter uses only the explicit review information and suffers from what's called the "cold start" problem. Advanced modeling techniques can apply both types of information to detect the most effective means of engaging a customer, even the most reclusive ones.
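One way to picture how the two data types complement each other is a simple hybrid recommender: when a customer has an explicit rating, use it; when she is a cold start, fall back to ratings from customers with similar implicit profiles. The toy sketch below illustrates that idea under made-up ratings and profile features; it is not a production recommender.

```python
import numpy as np

# Toy data: rows are customers, columns are products; NaN = no explicit rating.
ratings = np.array([[5.0,    np.nan, 1.0],
                    [4.0,    2.0,    np.nan],
                    [np.nan, np.nan, np.nan]])   # customer 2 is a cold start

# Hypothetical implicit profile features (e.g. scaled age, income, family size).
profiles = np.array([[0.90, 0.70, 0.20],
                     [0.80, 0.60, 0.30],
                     [0.85, 0.65, 0.25]])

def predict(customer, product):
    """Use the explicit rating if present; otherwise blend ratings from
    customers with similar implicit profiles (cold-start fallback)."""
    if not np.isnan(ratings[customer, product]):
        return ratings[customer, product]
    # Cosine similarity between implicit profiles.
    sims = profiles @ profiles[customer] / (
        np.linalg.norm(profiles, axis=1) * np.linalg.norm(profiles[customer]))
    others = [c for c in range(len(ratings))
              if c != customer and not np.isnan(ratings[c, product])]
    if not others:
        return float(np.nanmean(ratings))  # global fallback
    return float(np.average(ratings[others, product], weights=sims[others]))

print(predict(2, 0))  # estimate for the cold-start customer
```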
To accurately derive the NBA for an individual customer, you need more than profile information. The other crucial supporting input is her funnel state (unaware, aware, interested, active, conversion). Such a state is obviously hidden from direct view, but it can be inferred from observable behavioral data. The resulting model is complex and requires a high level of sophistication to implement, but modern statistical techniques involving computer simulation are now able to tackle marketing challenges of this magnitude.
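One common way to frame this inference is a hidden Markov model: the funnel stage is the hidden state, and observable events such as ad impressions, site visits, and purchases are the emissions. The sketch below runs the standard forward algorithm over an example click trail; every probability in it is an illustrative assumption, not a fitted value.

```python
import numpy as np

# Hidden funnel states and example observable events.
states = ["unaware", "aware", "interested", "active", "conversion"]
events = ["no_activity", "ad_impression", "site_visit", "add_to_cart", "purchase"]

start = np.array([0.70, 0.15, 0.10, 0.04, 0.01])

# Transition matrix: customers mostly stay put or move one step down the funnel.
trans = np.array([
    [0.80, 0.15, 0.04, 0.01, 0.00],
    [0.05, 0.70, 0.20, 0.04, 0.01],
    [0.02, 0.08, 0.65, 0.20, 0.05],
    [0.01, 0.04, 0.15, 0.60, 0.20],
    [0.00, 0.00, 0.05, 0.15, 0.80],
])

# Emission matrix: probability of each observed event given the hidden state.
emit = np.array([
    [0.85, 0.10, 0.05, 0.00, 0.00],
    [0.40, 0.35, 0.23, 0.02, 0.00],
    [0.15, 0.25, 0.45, 0.14, 0.01],
    [0.05, 0.10, 0.40, 0.35, 0.10],
    [0.02, 0.05, 0.23, 0.30, 0.40],
])

def funnel_belief(observed):
    """Forward algorithm: posterior over hidden funnel states given a click trail."""
    obs = [events.index(e) for e in observed]
    belief = start * emit[:, obs[0]]
    belief /= belief.sum()
    for o in obs[1:]:
        belief = (belief @ trans) * emit[:, o]
        belief /= belief.sum()
    return dict(zip(states, belief.round(3)))

print(funnel_belief(["ad_impression", "site_visit", "add_to_cart"]))
```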
The general process for generating predictive models is to (see the sketch after this list):
1. Integrate all data sources
2. Create data pipeline (data cleansing, filtering, transformation)
3. Run hypothesis against the data, validate results
4. Productionize the model (massive parallelization, instrumentation)
5. Deploy, A/B test, and collect feedback
6. Iterate and improve the hypothesis
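As a minimal sketch of steps 2 through 5, the example below wires transformation and a model into one scikit-learn pipeline, validates the hypothesis with cross-validation, and fits the object that would be deployed behind an A/B test. The synthetic data and logistic regression model are stand-ins for the real integrated customer table and hypothesis.

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.datasets import make_classification

# Stand-in for the integrated customer table (step 1): features might be
# profile attributes and behavioral aggregates, the label a past conversion.
X, y = make_classification(n_samples=2000, n_features=12, random_state=0)

# Step 2: the pipeline bundles transformation with the model so the same
# steps run identically in validation and in production.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])

# Step 3: test the hypothesis against held-out data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
scores = cross_val_score(pipeline, X_train, y_train, cv=5, scoring="roc_auc")
print("cross-validated AUC:", round(scores.mean(), 3))

# Steps 4-5: the fitted pipeline is what gets deployed behind an A/B test;
# feedback from the test drives the next modeling iteration (step 6).
pipeline.fit(X_train, y_train)
print("holdout accuracy:", round(pipeline.score(X_test, y_test), 3))
```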
In summary, the mathematical work is preceded by an elaborate engineering stage that requires deep expertise in big data technology. The advanced modeling techniques also require more than the usual statistical libraries, because of performance considerations at deployment time.
Micropersonalization is within the grasp of any forward-looking company that embraces both big data engineering and advanced predictive modeling.