Posted by: Carole-Ann | March 27, 2014

My first ‘Hello World’ rule

This blog has now moved. This post was originally posted on the Sparkling Logic blog.

Let’s go back to basics and explore how business rules work.  For that purpose, I am using the inevitable ‘Hello World’ project.  I will show you today that agility can be achieved without a whole lot of complexity…  My first-grade son could do it, so it can’t be that hard ;-)

Before I start writing rules, let me take a very quick detour.  To make the exercise more concrete, it helps to have test cases so that we can see how the rules apply as we write them.


[Image: coin toss]

Have you ever heard of Champion / Challenger?  If you are in the Financial Services industry, chances are you have, and you might even have used this technique.  If not, you may not have come across it yet.

What is Champion / Challenger?

In a nutshell, the idea is to compare two or more strategies in order to promote the one that performs the best.

When your decision logic implements regulations or contracts, the business rules are pretty much defined in stone, and very little is left to interpretation.  But when you are making decisions as to what product you should recommend, or which transaction is most likely fraudulent, or how much you should increase an account’s credit limit, there are no right or wrong business rules.  You have to deal with some degree of uncertainty.  Should you target users based on age or type of smartphone?  Should you set your threshold here or there?

In order to know which strategy performs the best, whether it is a simple threshold update or a completely new approach, you have to actually try them in real life.  Although business rules make it easy to switch from one strategy to another, swapping from your Production rules to the new strategy might introduce a level of risk that is not reasonable…  It might also be tedious and biased to try one strategy, then another, then another sequentially…  Not to mention the inconsistent behavior as seen from the outside…

Champion / Challenger addresses all of these issues by deploying all of them at once.  You might wonder how this can be done since you usually have to pick ONE decision, not a bunch of them.

You might be familiar with the concept of AB testing in advertisement…  The idea is to publish different ads to a market segment and measure which one performs the best.  In order to protect the integrity of the experiment, you have to randomly select who gets A and who gets B, and potentially a few more alternatives.  On websites, marketers try different headlines, styles, or content.

Champion / Challenger does the same.  The champion is your Production strategy (aka decision logic, business rules).  You can make it compete with one or more challengers (aka variants of the decision logic).  You do have control as to how many transactions will go through one or another of these strategies within a segment, but you will let them be selected at random.  As Champion / Challenger selects the strategy, it keeps track of the assignment for measuring its performance.  You can track in real time how well each strategy is performing, and possibly shut down or tweak some experiments that are under-performing.
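As a sketch of how the weighted random assignment might work (the strategy names, weights, and tracking list below are illustrative, not a real implementation):

```python
import random

# Hypothetical experiment setup: 80% of transactions keep the champion,
# the rest are split between two challengers.
STRATEGIES = {
    "champion": 0.80,      # current Production decision logic
    "challenger_a": 0.15,  # variant with a different threshold
    "challenger_b": 0.05,  # variant targeting by smartphone type
}

assignments = []  # kept so each strategy's performance can be measured later

def pick_strategy(transaction_id):
    """Randomly assign a transaction to a strategy, honoring the weights."""
    names = list(STRATEGIES)
    weights = list(STRATEGIES.values())
    choice = random.choices(names, weights=weights, k=1)[0]
    assignments.append((transaction_id, choice))  # track for measurement
    return choice
```

With the assignments logged, measuring each strategy’s performance is just a matter of grouping outcomes by the recorded strategy name.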

After minutes, days, or weeks, once you have gained enough evidence that one of the challengers is doing much better than the others, you can promote it as the new champion.  You may want to create new variants to compete against it once again.

The beauty of this mechanism is that you have full control over the percentages.  If you are risk-averse, you might want to run experiments on only 5% of your portfolio and exclude your VIP customers.  If you are more aggressive, you might want to spread your eggs into multiple baskets equally and get faster results.

Knowing which strategy was applied is critical for consistent behavior over time.  In some cases, regulations force you to keep track of the rules that were in force at the time of the decision so that you reuse the same business rules if you need to re-process the application.  Even when regulation is not imposing it, you may want to run experiments across decision services through a customer session, or across sessions for a better user experience.  Champion / Challenger provides the infrastructure to do just that.
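One common way to keep the assignment consistent across sessions (an illustrative technique, not necessarily how any particular product implements it) is to derive the strategy from a hash of a stable customer identifier, so the same customer always lands in the same experiment cell:

```python
import hashlib

# Illustrative sketch: bucket boundaries and strategy names are made up.
BUCKETS = [
    (80, "champion"),       # ids hashing to 0-79  -> champion (80%)
    (95, "challenger_a"),   # ids hashing to 80-94 -> challenger A (15%)
    (100, "challenger_b"),  # ids hashing to 95-99 -> challenger B (5%)
]

def assigned_strategy(customer_id: str) -> str:
    """Deterministically map a customer to a strategy cell."""
    digest = hashlib.sha256(customer_id.encode("utf-8")).hexdigest()
    cell = int(digest, 16) % 100  # stable value in 0..99 for this customer
    for upper, name in BUCKETS:
        if cell < upper:
            return name
    return "champion"  # defensive fallback; not reachable with these buckets
```

Because the mapping is deterministic, any decision service that sees the same customer id computes the same cell, across sessions and services.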

Why can’t I just run a simulation like before?

Well, referring back to the illustration, the alternative is a coin toss when it comes to changing your strategy.

Complete coin toss?  I sure hope not.  I am sure that you do a fair amount of testing before publishing your business rules in Production.  By running your historical transactions through several variants of your decision logic, you can get an idea of their business performance, assuming that past behavior is consistent with current behavior of course.

There are some aspects of your decision that you cannot anticipate in simulation though.

For example, you can check how many times you would have proposed offer A versus offer B, let’s say a free smartphone versus a 12-month discounted service.  But you cannot easily estimate how many customers would have accepted one offer or the other.  The only way to know for sure is to actually offer the new ‘product’ to some of your customers and see how they react.

How hard is it really?

Not as hard as you think…  We believe that you should focus on what you want to experiment with, and not on the plumbing that assigns the strategies and keeps track of them.  So the work is really just to define the variants and ‘sign them up’ for an experiment.  The rest is magic!

If you’d like to see it live, register for a free webinar on 2/26/2014 at 9am PT

Posted by: Carole-Ann | February 3, 2014

What is Decision Management?


I am often asked what Decision Management is.  If you too are wondering, this video is for you.

The purpose of this video is to explain, through a use case, what problem we solve and how.  And when I say ‘we’, I really mean the industry.  This is not a commercial about SMARTS; it is an educational video that could apply to any product.  We believe we do it better, but that is a completely different topic ;-)

This video describes a use case — our traditional insurance underwriting example of course.  But keep in mind that these technologies can be used in any type of decision-making application where the logic is complex, owned by business analysts or business users, and changing often.

Feel free to contact me or post a comment if you have questions about Decision Management.  I love sharing what I know about my passions!

Posted by: Carole-Ann | January 27, 2014

Decision Management Predictions for 2014


As is customary, let me share what I foresee as being big this new year… I would like to focus on just three points that strike me as important, in no particular order.

1. Predictive Analytics

Well, of course, we have been seeing that trend develop for a while. This is certainly not a surprising entry in this list.

The fact is that we see more and more projects combining predictive analytics and business rules. What is really interesting to me is the fact that more and more business analysts are getting trained to develop some of these predictive models.

Given the data scientist shortage, it makes total sense. If you do not have a modeling team in-house, or if it is swamped with high-priority projects, you may as well look for other ways to leverage the available data to inform your decisions.

I am optimistic that we will see more business analysts add predictive analytics to their skill set.

2. Business Intelligence

Sticking with analytics at large, I also see a greater synergy between business intelligence and business rules. We have talked about ‘Operational BI’ for a while now, but there seems to be a lot of activity finally taking shape.

I believe that there will be more projects that actually combine both in 2014, allowing companies to act on the insights gained from monitoring historical trends.

3. Internet of things

In my early years, we dreamed of ‘intelligent’ equipment, cars, and other things that would make our lives easier. While embedding computers in everything around the house was cost-prohibitive for the mass market back then, the Cloud is now making it a reality.

The beauty of having ‘things’ that can communicate is that they are immediately candidates for ‘higher intelligence’. By hooking them up with a decision service in the cloud, we can seamlessly allow them to react more appropriately and subtly to signals they sense around them. They can better adapt, since changing their behavior does not involve any hardware changes, or more generically any changes in-situ. The intelligence is located in the cloud, readily available for all connected things.

I am totally in awe with the progress we have made thus far, and the potential for a global ‘increase of intelligence’ of the things around us. The future is now!

Shash is sharing an interesting business model his customer has developed. The customer figured out a way to guarantee a level of power while reducing operational costs by extending battery life. While manual inspection and industry knowledge have been the key to this business, automating maintenance allows the company to sustain its rapid growth and remain ahead of its currently nonexistent competition.

The change of format for this talk is really cool.  Shash assigned tasks to the audience!  They can ask any questions, but the idea is to brainstorm on a possible solution for the customer’s problem.  This is one of the most exciting and productive exchanges between attendees so far!

The solution architecture can be summarized as follows:

  • Sensors collect data and publish it in the Data Warehouse
  • Business Rules and Predictive Analytics serve as the intelligence
  • A work order is issued to the service technician

The objective is to reduce the time to intervention from a few days down to 4 hours once the solution is deployed.

The solution relies on several self-service platforms:

  • Analytics: Data Warehouse + Predictive Analytics + Tableau
  • Cloud: Microsoft Azure
  • Decision Management: Sparkling Logic SMARTS (Yeah!)
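As a toy sketch of the flow above — sensor readings scored by a predictive model, with a business rule deciding whether to issue a work order — where the field names, scoring logic, and threshold are all hypothetical:

```python
# Minimal sketch of the maintenance-automation flow described above.
# The threshold, field names, and scoring formula are all made up.

def predict_battery_risk(reading):
    """Toy stand-in for the predictive model: risk score in 0..1."""
    # Assume lower voltage and higher temperature mean higher failure risk.
    risk = 0.0
    if reading["voltage"] < 11.5:
        risk += 0.6
    if reading["temperature_c"] > 45:
        risk += 0.3
    return min(risk, 1.0)

def maintenance_decision(reading):
    """Business rule: issue a work order when predicted risk is high."""
    risk = predict_battery_risk(reading)
    if risk >= 0.5:
        return {"action": "issue_work_order", "site": reading["site"], "risk": risk}
    return {"action": "monitor", "site": reading["site"], "risk": risk}
```

In a real deployment the model would be trained on the warehouse data and the rule thresholds would be owned by the business analysts; the point is only the division of labor between the two.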

I liked the ‘decision fatigue’ concept.  Like President Obama or Steve Jobs not ‘wasting’ their decision capital on picking clothes, research shows that the more decisions people make per day, the lower their quality.  As the objective is to reduce the number of technician errors, automation using decision management appeared to be an appealing solution.

I really enjoyed that talk!

Posted by: Carole-Ann | November 5, 2013

Decision CAMP 2013 – CTO Panel

Neil Raden is the host for our CTO Panel at Decision CAMP.  His victims are:

  • our own Carlos – Sparkling Logic
  • Jacob Feldman – OpenRules
  • Gil Segal – Sapiens
  • Mark Proctor – Red Hat

The conversation goes fast to Business Rules and Analytics of course.

Carlos says that the Analytics market is much bigger than Business Rules.  SAS alone is a couple of billion dollars.  The difficult part is the operationalization of analytics, and this is the opportunity for Decision Management.  Deployment of models is made easier.  That was a big discovery for the modelers.

Predictive models can provide additional insight that can improve business rules.

The additional element of the convergence is business optimization.

Jacob jokingly accused Neil of claiming that optimization was very complicated, limiting its adoption.  Personally, I think optimization, especially constraint technology, is fantastic, but I have always been wary of the sensitivity of the modeling approach.

Taking a step back to look at business user empowerment: Gil argues that business users can be empowered by following a strict and rigorous approach.

TDM is the model – The Decision Model

DMN is a notation – Decision Model and Notation

Carlos claims that a single representation cannot satisfy business users for the entire decision lifecycle, because exceptions happen over time and break the selected rule metaphor, or because users need a different view for different activities (capture, approval, etc.).  Just as developers need different tools, business users need different representations for their decision logic.

Neil tells the story of Gorillas in the Mist, which ended up being a success because they let gorillas write the script.  How do we let business users write the script?

Carlos: business users need collaboration and social features to interact and capture comments

Interesting comment from Neil on collaboration being divide-and-conquer in the way Gil presents it.  Well, sometimes it is.

Why is the business rules market so small?

We need to do better on the user empowerment side.

Jacob disagrees.  It is not small; it is huge.  The biggest obstacle is knowledge.

Carlos says that it is small because it is fragmented.  BRMS are only one way to implement business rules.  They exist in other forms in existing systems.  Carlos is hopeful because data is getting much more easily available, creating opportunities for decisions to be automated.

Data is central.  Inducing business rules from data is obviously something we believe in at Sparkling Logic…  Attend our BluePen workshop at lunchtime if you would like to learn more about it or PayPal’s talk on Wednesday.  Jacob invested in rule induction as well.  One good point that Carlos makes is that induced rules are actually understandable by domain experts.  This is very important for regulated industries where decisions are subject to legal compliance.

Mark is not as convinced that automatic rule generation is a good thing…

Why don’t we have more business rules libraries?

Carlos: because we do not have standard data models

Thoughts about Watson…

Machine learning is fascinating.  Carlos highlights that the most fascinating aspect is the ability to forget…  Data is not stationary, so being able to learn and forget is key to adaptive behavior.

What percentage of rules is reusable?

Rules engines are really good at fitting in, inherently.  Business rules captured in a BRMS are protected and reusable.  The key thing though is that they rely on a given object model.  As long as the underlying model is fairly consistent, you are fine.  Replatforming when the data model changes is another story.

Sparkling energy in this CTO Panel!  Great job, guys!

It is a great honor to have Charles join us for the very first Decision CAMP ever.  Charles needs no introduction, of course.

Today he is presenting a keynote on ‘writing concise rules’.  I will upload the slides after the presentation.  The presentation contains detailed examples of techniques you can use for more elegant rule writing.  I think that Carlos is going to have his own blog post about it too.

Rules become interesting when dealing with collections, as they ‘pattern match’ the characteristics that you are looking for.  Unfortunately, in the absence of set operations, the rule syntax can become complicated.  Charles is describing concrete examples that illustrate that point.  The beauty of set operations is that they let you just describe what is needed, and they ‘just do it’.

The point of this presentation is to show how set operations are more concise, easier to understand, and they remove the need for ‘markers’ that pollute the rules.  Set operators can have an impact on runtime performance as well.

I loved the plug for SMARTS :-)

Let me tell you more about my own experience going through the same realization…

This is actually where we see rule syntax and SQL-like expressions come together.  Carlos and his team have been very complimentary of Microsoft LINQ.  As the ‘business’ user, I am always reluctant to learn yet another syntax.  But as they showed me what could be done with LINQ, I became fascinated by its expressive power.  Some pretty complicated rules can be more easily described in a SQL-like fashion.  Let me give you an English-like example: looking for the youngest driver that has at least one accident.  Without a set operator, you would likely go over each driver, make sure they have at least one accident, and compare them to all other drivers that also have one accident…  Double patterns are something I have learned to avoid.  But with a set operator, you just look at the set of drivers, filter for those that have at least one accident, and take the minimum age.  Simple!
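The same youngest-driver rule can be sketched set-style in, say, Python rather than LINQ (the data and field names below are made up):

```python
# Sketch of the "youngest driver with at least one accident" rule,
# written set-style: filter the set, then take the minimum age.
drivers = [
    {"name": "Ana", "age": 24, "accidents": 1},
    {"name": "Ben", "age": 19, "accidents": 0},
    {"name": "Cam", "age": 21, "accidents": 2},
]

# No double pattern: one filter, one aggregation.
youngest_with_accident = min(
    (d for d in drivers if d["accidents"] >= 1),
    key=lambda d: d["age"],
)
```

The instance-oriented version would need a rule comparing each driver against every other qualifying driver; the set-oriented version says directly what is wanted.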

We loved LINQ so much, we called our version of it, in SMARTS, SparkLINQ.  Who said that engineers were not good at marketing?  I found it clever and cute :-)

Posted by: Carlos Serrano-Morales | November 5, 2013

Writing simpler rules

Dr Charles Forgy delivered a presentation at Decision Camp 2013 today on how to write simpler rules.

This is a new type of presentation from Charles – a presentation from the perspective of the user of rules rather than the implementer of a rules engine. This switch in perspective from the father of Rete is interesting.

Charles sees a fundamental shift taking place in the expression of rules. We are switching from instance-oriented rules to set-oriented rules – rules that express conditions based on set conditions, and that fire on the conditions met at the set level rather than the object level. Charles makes the distinction between:

  • conditions that return a collection of objects
  • conditions that select one object from a set of objects
  • conditions that compute a value out of a collection of objects

Of course, these conditions may perfectly well be combined.
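To make the three condition types concrete, here is a small sketch in Python (the data and field names are made up):

```python
# Illustrative examples of the three set-oriented condition types above.
orders = [
    {"id": 1, "amount": 120.0, "priority": True},
    {"id": 2, "amount": 35.0, "priority": False},
    {"id": 3, "amount": 80.0, "priority": True},
]

# 1. A condition that returns a collection of objects.
priority_orders = [o for o in orders if o["priority"]]

# 2. A condition that selects one object from a set of objects.
largest_order = max(orders, key=lambda o: o["amount"])

# 3. A condition that computes a value out of a collection of objects.
total_amount = sum(o["amount"] for o in orders)
```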

Set-oriented rules:

  • are more concise
  • are easier to understand and manage
  • remove the need to implement “tricks” in rules or the object model

They also may have a significant impact on performance. Charles took a look at WaltzDB – 4 of the rules in this venerable benchmark are clearly set-oriented rules. In any run of the program, two of the rules fire (one time each) – which on the hardware he uses results in a 4.3 secs run time.
Now, when rewriting these rules into smarter set-oriented rules, the performance on the same hardware for these rules is orders of magnitude better – 0.1 secs.
After applying set-oriented rules to WaltzDB, the overall execution time on his hardware went from 109 secs to 63 secs, with a significant gain in understandability and manageability.

Different engines support set-oriented rules in different ways – with more or fewer syntactic artifacts. Charles presented examples in OPS/J, Jess, Drools and SMARTS.

Charles provides the following recommendations:

  • Set oriented rules are more concise and easier to understand
  • Set oriented rules may yield significant performance enhancements
  • Adding user-defined aggregates significantly enhances the expressive power of the rules

Disclaimer: the next paragraphs relate to the product we at Sparkling Logic have brought to market: SMARTS™.

We strongly believe that set-oriented rules are a significant tool in the hands of business users. Very frequently, the business problem to solve is naturally expressed in terms of set operations:

if the shopping basket contains at least 2 food items of different categories and a magazine and if the customer has made purchases of at least $50 in the last 3 weeks, then offer promotion X

A business rule such as this one is not simple to express without powerful set-oriented capabilities – without them, the business rule will be implemented as a complex set of iterating rules, or mostly in the object model, making its management a technical ordeal and not something a business user can tackle.
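As an illustration, the promotion rule above reads almost literally when written with set operations; here is a sketch in Python, with a hypothetical data model for basket items and purchase history:

```python
from datetime import date, timedelta

# Hedged sketch of the promotion rule above, in set-oriented style.
# The data model (item kinds, categories, purchase records) is made up.
def qualifies_for_promotion(basket, purchases, today):
    food_categories = {i["category"] for i in basket if i["kind"] == "food"}
    has_magazine = any(i["kind"] == "magazine" for i in basket)
    recent_total = sum(
        p["amount"] for p in purchases
        if today - p["date"] <= timedelta(weeks=3)
    )
    return len(food_categories) >= 2 and has_magazine and recent_total >= 50
```

Each clause of the business rule maps to one set operation (a projection, an existence test, a filtered sum), instead of a web of iterating rules.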

SMARTS provides high level constructs to handle set-oriented rules that have the expressive power of SQL but are nicely integrated in the business rules management system.

Posted by: Carole-Ann | July 31, 2013

Hot Tech Trends for Machine Learning

I joined the Churchill Club this morning for an exciting breakfast on Machine Learning.  In May 2013, Steve Jurvetson of DFJ said on the Churchill Club stage that he believes machine learning will be one of the most important tech trends over the next 3-5 years for innovation and economic growth.  I was eager to hear what Peter Norvig and the other guys would say about that.

No surprise

What might be surprising is that none of them painted an ‘unfathomable’ picture of the future.  It was all about more power, faster modeling, more data…

Vision?

I can’t say that they shared a vision…  I wonder if we have all been dreaming in our young years, watching Star Trek, as super-computers fueled our imagination.  A super-smart machine able to assist the crew, and eventually practice medicine or search for its ‘humanity’: that is the vision.  We are all working hard at figuring out ways we can make it real, ways we can build technology that achieves the ideals we grew up dreaming about.

It has been a rocky road for Artificial Intelligence, but in the past few years, Watson, the self-driving car and other wonders have made us believe that machine learning could actually live up to our expectations, and more.

Takeaways

So what were the key takeaways of this discussion?

We need to embrace uncertainty.  We have been thinking about decision support as a single recommendation or prediction, but we need to expand that model.  The prediction could be more complex: represented as a range rather than a single response, having multiple alternatives, maybe plotting black swans.

One of the challenges we face when dealing with uncertainty is complexity.  It takes really good data scientists to model for uncertainty and to represent it clearly.  It takes experts to comprehend the results.  The data scientist shortage stands in the way of more prevalent data-driven decisions around the world.

One avenue to address the talent gap is to ask more from the machines.  Machine Learning, or rather our usage of it, needs to evolve to reach the next level.  This is why Google has been investing in deep learning, aiming to build intermediary representations, a complex problem that we have been tackling for decades.  All of this work is starting to come to fruition in image and speech recognition, gradually improving what machines can do and understand.

Another avenue is to make the technology more accessible.  One gentleman asked about Machine-Learning-as-a-Service, making these capabilities readily available in the Cloud.  Several solutions are actually available in the Cloud; SMARTS BluePen is one of them.  So accessibility in the form of ‘technology you can access from your laptop’ is not the issue.  The issue is to make it accessible in terms of skills.  Gurjeet Singh made the point that we need to expand the audience for Machine Learning capabilities to users that are not data scientists.  I wholeheartedly agree with that.  Data scientists are desperately needed for heavy analytics, but there are tons of other use cases where a data-driven insight given to a business user at the right time would deliver a huge impact.

I understand that, data being the lifeblood of Google, there are very few efforts, if any, to pursue expert knowledge combined with data insight.  That is a shame, I think, because there is tremendous value in combining both.  Compliance applications are 100% on the business rules side.  Established trend detection is 100% on the data side.  But there are a lot of shades of grey in the middle, like fresh trend detection where the data has not yet accumulated.

I am not surprised though that this crowd is not thinking that way yet.  In a past conversation with Larry Rosenberger, he shared with me how much of a revelation it was when he realized that ‘an ounce of knowledge is worth a ton of data’.  He had lived most of his life in a world dominated by data, and he may not have opened his mind to business rules without the opportunity knocking on his door.  One is not better than the other; they are both first-class citizens of better decisions.  But until you start thinking in terms of a toolbox, you might love your hammer for anything that looks more or less like a nail…

Posted by: Carole-Ann | April 2, 2013

A Practical Guide to Business Objectives and KPIs

As we hosted a recent webinar series on “Getting to the Best Decision”, focused on testing and improving business rules, several questions came up on KPIs — Key Performance Indicators.  It is true that we tend to use many different terms for the same thing.  You may have heard us or others speak of business objectives, KPIs, metrics, calculations, characteristics, features or variables.  Are they synonyms?

I’d like to suggest ‘my’ definitions.  Feel free to comment on whether you agree or not, and share the nuances you recommend.

Business Objectives:

Business objectives are the high level goals for the initiative.  The current project aims at increasing your profitability, reducing your costs, improving your customer satisfaction, adhering to compliance mandates, etc.

In addition to the ‘direction’ of the expected outcome (increase or decrease), the business objectives may state a tangible goal: obtaining 80% automation or more, decreasing fraud by 10% or more, etc.

Business objectives are often defined by the management team, before they invest in a project.  Although the corporation or institution has captured those business objectives in the business plan, they are not often clearly documented in Decision Management initiatives.

Metrics:

Metrics are measurements that support your Business Objectives.

If you want to increase your profitability, the metric that matters here will be the formula that defines profitability.  Will you measure ‘revenue – expenses’?  Will you need to track profitability using a much more precise calculation?

If you want to improve customer satisfaction, the number of metrics involved could be much more than one…  Will you measure the amount of positive feedback on support calls?  The number of support calls?  The results of customer surveys?  A sentiment analysis on your twitter hashtag?

There is no right or wrong as it relates to the metrics you decide to track.  The most important recommendation in my opinion is to have SMART goals (not to be confused with SMARTS, our decision management system):

  • S for Specific,
  • M for Measurable,
  • A for Attainable,
  • R for Relevant,
  • T for Time-sensitive.

Define the set of metrics that you can measure.  Make sure that the supporting data will be available and reliable.

KPIs – Key Performance Indicators:

KPIs are a subset of the metrics that you track, used to monitor the health of your project, or your business performance.  While you may want to have visibility into a multitude of metrics on your business, only roughly a handful will ultimately matter to measure how well you are doing on your business objectives.

In the mid-90s, I used to build dashboards.  We defined hundreds of KPIs.  The executive dashboard only displayed a handful, maybe a dozen, KPIs on a summary view.  The other metrics were used to investigate, to drill down, as a top indicator turned red.

Calculations or Variables:

Metrics are calculations, in the sense that a formula dictates how to come up with the number or label.  That does not mean that all calculations are metrics!

Many calculations exist in Decision Management projects for the purpose of combining or preprocessing data elements:

  • Mathematical formula: the ‘debt to income’ ratio for example
  • Statistics: the average number of accidents per driver for example
  • Binning: the age group or purchase category for example

These calculations are typically defined to support the business rules: business rules can refer to these calculations or variables in the same manner they also refer to the input fields.
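For instance, the three calculation types listed above might look like this in code (the field names and bin boundaries are made up):

```python
# Sketches of the three calculation types: formula, statistic, binning.

def debt_to_income(applicant):
    """Mathematical formula: monthly debt divided by monthly income."""
    return applicant["monthly_debt"] / applicant["monthly_income"]

def avg_accidents_per_driver(drivers):
    """Statistic: average number of accidents across the drivers on a policy."""
    return sum(d["accidents"] for d in drivers) / len(drivers)

def age_group(age):
    """Binning: map a raw age to a coarse group the rules can refer to."""
    if age < 25:
        return "young"
    if age < 65:
        return "adult"
    return "senior"
```

Business rules would then refer to `debt_to_income` or `age_group` exactly as they refer to raw input fields.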

Calculations may be tracked over time, like any other input field, to analyze whether the demographics are shifting.  If the average age increases significantly as the population ages, you may want to rethink your decision logic that takes age into account.  This is a useful data-point, but it would likely not make it into your KPIs.

Characteristics or Features:

Characteristics and features may be calculations or input fields that you take into account in your decision logic.  This terminology is most often used by data scientists in charge of predictive modeling.
