Illumination Works uses an artificial intelligence technique based on well-formed formulas, a concept from formal logic, to create metadata structures and build business rules engines for customers. This solution gives business analysts and developers a consistent, straightforward way to manage complex business rules and improve reporting.
Overview
The business rules engine is a low-level form of artificial intelligence that unifies rules and processes in a single location for increased data accuracy, flexibility, and ease of maintenance. Data architects work with stakeholders to define what should happen when certain conditions are met, then build rules engine code that combines those conditions and applies the rules incrementally to reach a final decision, as sketched below.
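To make the incremental mechanism concrete, here is a minimal Python sketch. The conditions, actions, and field names are hypothetical; a production engine would read them from metadata tables rather than hard-coding them.

```python
from typing import Callable

Record = dict
Rule = tuple[Callable[[Record], bool], Callable[[dict], dict]]

# Hypothetical rules: each pairs a condition with a refinement of the decision.
rules: list[Rule] = [
    (lambda r: r["amount"] > 0,          lambda d: {**d, "status": "valid"}),
    (lambda r: r["channel"] == "RETAIL", lambda d: {**d, "cost_center": 200}),
    (lambda r: r["amount"] > 1000,       lambda d: {**d, "review": True}),
]

def decide(record: Record) -> dict:
    """Apply the rules incrementally; the accumulated refinements are the final decision."""
    decision: dict = {}
    for condition, apply in rules:
        if condition(record):
            decision = apply(decision)
    return decision

print(decide({"amount": 1500.0, "channel": "RETAIL"}))
# -> {'status': 'valid', 'cost_center': 200, 'review': True}
```

Because each rule only refines the working decision, the set of rules and the strictness of each condition fully determine the outcome.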
Benefits
- Increased flexibility with improved accuracy
- Rapid identification of an imbalance’s cause and remedy
- Easy to maintain (add new, modify, remove rules)
- Unifies rules and processes to a single location
- Allows for advanced “what-if” rules scenarios
- Can be applied to any type of industry data (financial, healthcare, DoD, energy, etc.)
The rules engine focuses on automation and is developed with an agile methodology that follows best practices. Implementing or modernizing a rules engine to support a customer’s business follows four key steps.
Step 1: Identify & Apply Limited Scope Master Data Management
Key to the success of a rules engine is a concise, limited scope with strictly applied master data management that weaves the data aspects through people, process, and technology. Establishing master data early creates the fundamental building blocks for proper execution of processes shared across business units and systems. Master data management promotes process efficiency, simplicity, and data quality by identifying a single version of the truth that runs through the entire organization, improving the value technology brings to the business; a sketch of this canonicalization follows.
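As an illustration, here is a minimal Python sketch of limited-scope master data management. The channel codes and mappings are hypothetical; the point is that variant values from different source systems resolve to one canonical value before any rule sees them.

```python
# Hypothetical master mapping: every source-system variant of "channel"
# resolves to a single version of the truth.
MASTER_CHANNEL = {
    "WS": "WHOLESALE", "WHLSL": "WHOLESALE", "WHOLESALE": "WHOLESALE",
    "RT": "RETAIL",    "RETAIL": "RETAIL",
}

def to_master(raw_channel: str) -> str:
    """Resolve a source-system variant to the canonical master value."""
    channel = raw_channel.strip().upper()
    if channel not in MASTER_CHANNEL:
        raise ValueError(f"channel {raw_channel!r} is outside the master scope")
    return MASTER_CHANNEL[channel]

print(to_master("ws"))      # -> WHOLESALE
print(to_master("Retail"))  # -> RETAIL
```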
Step 2: Develop Business Rules Tables, Structures & Metadata
After the rules are identified and agreed upon by stakeholders, data architects build a set of metadata tables holding the rules, including lookup rules that tell the engine where to find further rules when needed. The engine typically comprises one or two primary data tables and roughly ten supporting tables that enforce strict referential integrity (built on the limited-scope MDM), so every lookup narrows to a single answer without the need for natural language processing. Each component in the architecture is designed to hold highly flexible rules, allowing “what-if” iterations of rules to be applied to the data; a sketch of such a schema follows. Comparing the results of these iterations builds confidence, since good results can be measured against better ones.
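A minimal sketch of such metadata tables, assuming a SQLite store (the table and column names are hypothetical): the foreign keys supply the strict referential integrity described above, rejecting any rule built on a word outside the approved vocabulary.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity

# Restricted vocabulary: every attribute a rule may reference must exist here.
conn.execute("""
CREATE TABLE vocabulary (
    word TEXT PRIMARY KEY
)""")

# Primary rules table: each rule maps a condition on an approved word to an
# action, optionally chaining to another rule for further lookups.
conn.execute("""
CREATE TABLE rules (
    rule_id    INTEGER PRIMARY KEY,
    word       TEXT NOT NULL REFERENCES vocabulary(word),
    condition  TEXT NOT NULL,
    action     TEXT NOT NULL,
    next_rule  INTEGER REFERENCES rules(rule_id)
)""")

conn.execute("INSERT INTO vocabulary VALUES ('CHANNEL')")
conn.execute("""INSERT INTO rules VALUES
    (1, 'CHANNEL', '= WHOLESALE', 'allocate to cost center 100', NULL)""")

# Referential integrity rejects any rule built on an unapproved word.
try:
    conn.execute("""INSERT INTO rules VALUES
        (2, 'REGION', '= WEST', 'allocate to cost center 200', NULL)""")
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```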
Step 3: Bring Data from Various Sources into a Unified Landing Area
In addition to business rules, the rules engine houses ingestion rules. Data architects apply Illumination Works’ proven Ingestion Framework methodology to the rules engine. This metadata-driven approach allows new sources to be added at any time using pre-built templates. Load generation is template based and controlled by definition files that place the data in layers, so tailoring the generated code is simply a matter of changing the templates, which enhances ease of maintenance. Incoming transactions can be cleaned, prepared, and split into batches (for performance) in the staging table if required. This approach enables speed to market and analytics across multiple sources at once, and it provides an easy way to add and remove data sources; a sketch of a definition-file-driven load follows.
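A minimal Python sketch of the definition-file approach (the file format, source names, and layer names are hypothetical): adding a new source means adding an entry to the definition file, not writing new load code.

```python
import csv
import json
import pathlib

# Hypothetical definition file: each source names a file pattern and the
# layer its data lands in.
definition = json.loads("""
{
  "sources": [
    {"name": "billing", "pattern": "billing_*.csv", "layer": "staging"},
    {"name": "claims",  "pattern": "claims_*.csv",  "layer": "staging"}
  ]
}
""")

def load_source(source: dict, landing: pathlib.Path) -> None:
    """Load every file matching this source's pattern into its target layer."""
    target = landing / source["layer"] / source["name"]
    target.mkdir(parents=True, exist_ok=True)
    for path in landing.glob(source["pattern"]):
        with path.open(newline="") as f:
            rows = list(csv.DictReader(f))
        # Split into batches for performance, as described above.
        for i in range(0, len(rows), 10_000):
            batch = rows[i:i + 10_000]
            # ... write the batch to the staging table here ...
            print(f"{source['name']}: staged {len(batch)} rows from {path.name}")

for src in definition["sources"]:
    load_source(src, pathlib.Path("landing"))
```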
Step 4: Make Data Available to Users for Analysis & Reporting
The final step is to create a business intelligence layer or a web user interface that enables users to query the data and create reports. Data architects and front-end developers work with users to identify their query and reporting requirements and develop the user experience. User needs range from dashboards and canned reporting to data scientists working with raw data, applying languages such as Python and R for analysis and insights, as in the short example below.
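For instance, a minimal sketch, assuming the unified data has been pulled into a pandas DataFrame (the column names and values are hypothetical):

```python
import pandas as pd

# Hypothetical transactions pulled from the unified landing area.
transactions = pd.DataFrame({
    "CHANNEL": ["WHOLESALE", "RETAIL", "WHOLESALE"],
    "AMOUNT":  [120.0, 75.5, 200.0],
})

# A canned report: totals by channel, ready to feed a dashboard.
report = transactions.groupby("CHANNEL", as_index=False)["AMOUNT"].sum()
print(report)
```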
Detailed Discussion
The rules engine consists of a dictionary of roughly 100 words. If natural language processing were used instead, the vocabulary would grow very large (many thousands of words) and complicated by conditions such as whether something has already happened, or whether this is the second occurrence or the last.
Ambiguity is removed in the rules engine by boiling the language down to just a handful of words. This is key. With a limited-scope dictionary, the engine is essentially told what it can bring in: the data has to fit one of the restricted words in the vocabulary, so the rules engine knows exactly what to do with it. The sketch below illustrates the idea.
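A minimal Python sketch (the approved words are hypothetical): incoming data either maps onto an approved word or is rejected before any rule runs.

```python
# Hypothetical restricted vocabulary.
APPROVED_WORDS = {"CHANNEL", "REGION", "PRODUCT", "COST_CENTER"}

def validate(record: dict) -> dict:
    """Reject any record carrying attributes outside the approved dictionary."""
    unknown = set(record) - APPROVED_WORDS
    if unknown:
        raise ValueError(f"attributes not in the dictionary: {sorted(unknown)}")
    return record  # the engine now knows exactly what to do with every field

validate({"CHANNEL": "RETAIL", "REGION": "WEST"})    # accepted
# validate({"CHANNEL": "RETAIL", "SEGMENT": "SMB"})  # would raise ValueError
```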
To put some context around the rules engine, suppose a business has several hundred extract, transform, and load (ETL) programs and needs to add another attribute, which essentially equates to another word in the language or dictionary. Leadership might tell the IT department: we have this new thing we want to do, that we want to measure, cost, and allocate, or incorporate into our business. The IT department’s realistic response could be that, because the attribute appears in several hundred programs, it will take a couple of months just to research how many months the actual work of adding the attribute will take. This is a very real and painful barrier to the business’s ability to expand and grow, accommodate new market channels, integrate new types of business, perform new customer segmentation, and so on.
Foundational to a business’s success is its ability to improve and stay competitive, and anything that restricts that can be devastating. With the rules engine, when leadership asks for a new attribute, the IT department simply adds it to the list of approved words in the dictionary along with its respective rules, and they are set. If the business wants to add new attributes year after year or month after month, or wants to drop an attribute, IT adds or removes it in the dictionary, and that is the only work needed to pivot and accommodate new features, new attributes, or retired attributes, as the continuation of the sketch below shows. The rules engine provides businesses with remarkable time savings, which can afford a significant competitive advantage.
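Continuing the hypothetical sketch above, the change is a one-line dictionary edit plus the new attribute’s rules, rather than changes to hundreds of ETL programs:

```python
# Adding a new attribute: a dictionary entry plus its rules, nothing more.
APPROVED_WORDS.add("MARKET_SEGMENT")

# Retiring an attribute is just as local.
APPROVED_WORDS.discard("REGION")

validate({"CHANNEL": "RETAIL", "MARKET_SEGMENT": "SMB"})  # now accepted
```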
In addition to ease of maintenance and the ability to change and flex, there is the benefit of increased accuracy. While the rules engine can benefit any industry, one example we have seen is in accounting, where a customer compared the rules engine’s results against the previous year’s. The discrepancies were small, but the rules engine was more accurate. This is primarily because the engine has a very strict vocabulary and is strict about processing the rules, which provides the foundation for a very limited, hard form of data governance. That affords the confidence, and a strong platform, to say that when event “x” occurs, “y” happens. This is the beauty of the rules engine: leveraging complex code and artificial intelligence, the engine knows exactly what the data means and exactly what to do with it. There is no ambiguity.
Wrap Up
Illumination Works’ Business Rules Engine is easy to maintain (add new, modify, or remove rules) and handles requirements changes, data model changes, and rules optimization well. Unifying rules and processes in a single location enables rule re-usability and provides a way to closely monitor and manage rules, giving businesses the ability to respond quickly to change with improved data accuracy and precise business rules executed automatically. To learn more about how a rules engine can help your business, contact Gary Telles.
About the Author
Gary Telles is the Executive Director of the Commercial Division at Illumination Works. Gary has been a partner with Illumination Works since 2010 and brings more than 25 years of hands-on programming and data experience. For 17 of those years, Gary led and mentored projects in data management, big data, artificial intelligence, BI/DW, data quality, and analytics. Gary’s mantra is to implement IT solutions better than anyone else.