Rules Are the Key to Building Automated Systems That Reason
If you've been coding long enough, you've probably noticed a fundamental problem with solving software problems the imperative way, and it will only become more apparent to the tech industry. Systems built in the imperative style will not scale in a world of ubiquitous computing. The use cases will be too complex and too numerous for a human to sit down and identify well enough to build the right software. On top of that, it is extremely difficult for people to articulate what they want accurately enough to identify good use cases and turn them into great software. Sadly, most of the industry still takes the approach of anticipating use cases so the logic for handling them can be imperatively programmed in for users.
We will soon come to grips with the fact that we can't control everything. It's also helpful to realize we can't know everything either. Trying to anticipate what a user will do with your software makes no sense for next-generation system design. Think closely about how you would anticipate the usage of a self-driving car, an AI assistant, or an AI-driven health care system, with all their different possibilities. You'd likely go mad trying to identify all the variables. This is why the imperative way of programming is in its last days. So how do we as engineers design systems for the future?
Luckily, there is a better technique for designing systems. In fact, most AI-based systems already use it to deal with high levels of complexity. The technique is the establishment of systematic rules. These rules must be well designed and reliable, so that the system has something to fall back on when it encounters data it has never seen before.
Rules are all around us. They govern everything in our society, and without them you'd have complete chaos. It has even been argued that humans follow innate rules that form the foundation of our moral compass: internal rules that let us know right from wrong without being told, and that give us our sense of morality and judgement. The point here is not to get philosophical, but to illustrate that if rules can govern the behavior of everything in the universe regardless of complexity, why can't they govern the behavior of machines?
To build a truly automated system you absolutely must have a rules engine. Creating an automated system without rules is like trying to play a game without rules: I promise it won't work, and you'll have no idea who's winning. Rules engines allow automated systems to figure things out on their own, so the human doesn't have to imperatively program every single thing that might happen into the system. Remember, in the future imperative programming won't get it done. Systems must have the ability to reason on their own.
In my opinion, a system that reasons on its own is a truly automated system. These types of systems can truly be called smart. The caveat is that you will need tons of data to test whether or not your system is acting rationally in any given situation. Luckily, there is no shortage of data today, but there is a major shortage of truly automated systems. That is on the verge of changing as I write this.
How are rules and reason related? These two pillars have a symbiotic relationship in automation. Rules let you know whether a system is acting reasonably when it's given loads of data to process. Likewise, you know your system is reasonable if it's following the rules of a particular domain.
To illustrate these two concepts further, let's take a familiar example. Say you needed to borrow money. The first thing you'd do is think about who in your inner circle you could borrow from. If you listened to reason, you would ask the person most likely to lend you the sum you were looking for. It would be completely illogical to ask your broke friend for even a couple of dollars, because you already know what the answer would be.
Instead you would ask one of your friends who would be more likely to have the money. Before making your decision, you would think about some rules of engagement with that individual. If you explored your conscience well enough, you would constrain yourself not to ask for an amount that would seemingly set your friend back; doing so would violate your own internal code for maintaining good friendships. Only after weighing all these factors would you finally decide to ask a given person for what you needed.
What if I told you this same process of thinking and coming to a conclusion can easily be modeled and turned into an automated system? The only difference is that the automated system would be able to consider more possibilities than a human can.
To find a solution for the example above, I developed an Elixir-based automated system named "VAGABOND" to simulate this thought process. I will show several interactions with VAGABOND that illustrate how systems can come to conclusions on their own, without the solution being programmed in. The solution is left completely up to the system to figure out. This is a very different approach to development than most of us are used to. Today, engineers spend the majority of their time hard-coding solutions into systems by hand rather than designing systems that find solutions for themselves. The latter approach saves a tremendous amount of time and effort. It also makes coding much more fun.
After examining the example above, I prepared a list of 1000 different friends to feed the system. The list is pretty simple.
Each entry in the list includes a name, a dollar amount, and either "YES" or "NO". The dollar amount was added at random to represent the cash the friend has available. The "YES" or "NO" was also chosen at random; it simulates the fact that a friend will either be willing to lend or won't be, which is something you cannot control.
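To give a concrete sense of the shape of that data, here is a hypothetical Python sketch that builds a list like the one described. Everything here is my own illustration: the names, the seed, and the $1–$1000 cash range are made up, and VAGABOND itself is written in Elixir.

```python
import random

# Hypothetical reconstruction of the friends list described above.
# Each entry pairs a made-up name with a random cash amount and a
# random willingness flag ("YES"/"NO"), mirroring the description.
def make_friends(n, seed=7):
    rng = random.Random(seed)
    return [
        {"name": f"Friend{i}",
         "cash": rng.randint(1, 1000),
         "willing": rng.choice(["YES", "NO"])}
        for i in range(1, n + 1)
    ]

friends = make_friends(1000)
print(friends[:2])
```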
As a side note, large amounts of data are very important for smart systems. Much of today's data is unstructured, so to make sense of it modern systems need the ability to parse through tons of it. The fun part about these systems is that you can feed them different data sets and they will automatically produce insights from each one. In the case above, we have provided VAGABOND with a list of friends who are either willing to lend or completely uninterested in lending.
It's also important to understand that feeding the system data isn't enough. We have to give it rules to follow. The rules help the system guide itself toward a reasonable conclusion about the data it's given. As discussed above, there are unspoken rules about asking friends to borrow money. After thinking about this further, I narrowed the rules down to these:
- Don't suggest borrowing an amount that is more than half of a friend's cashflow. For example, if a friend has $50, it would be extremely disrespectful to ask to borrow $50; asking to borrow $25 is much more reasonable. So herein lies the rule: don't request an amount from a friend that is more than half of his or her cashflow. That would just be rude.
- The other obvious rule is that you don't want to ask for an amount higher than a friend's total cashflow. Even if they are willing to lend, it wouldn't be mathematically possible for them to help.
- Don't suggest borrowing from friends you know for sure will say no. That would be a waste of time.
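The three rules above can be encoded as simple predicates. Here is a minimal sketch in Python for illustration; VAGABOND's actual Elixir code is not shown in this article, and the sample entries and function names are my own.

```python
# Hypothetical sample entries; the real list has 1000 of these.
sample = [
    {"name": "Broke Bob", "cash": 5,    "willing": "YES"},
    {"name": "Annamarie", "cash": 1200, "willing": "YES"},
    {"name": "Rich Rick", "cash": 2000, "willing": "NO"},
]

def eligible(friend, amount):
    """Check one friend against the three rules above."""
    return (friend["willing"] == "YES"          # rule 3: skip certain refusals
            and amount <= friend["cash"]        # rule 2: they must actually have it
            and amount <= friend["cash"] / 2)   # rule 1: never ask for more than half

def suggest(friends, amount):
    """Return the eligible friend with the most cashflow, or None."""
    candidates = [f for f in friends if eligible(f, amount)]
    return max(candidates, key=lambda f: f["cash"], default=None)

print(suggest(sample, 500))
```

Note that for positive amounts rule 2 is logically implied by rule 1, but keeping both checks mirrors the rules exactly as stated.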
I've designed VAGABOND to reason around only these three rules. VAGABOND's job is essentially to take in a list of friends and suggest the person most likely to lend, based on an amount entered by the user. Let's check out some interesting interactions with VAGABOND...
The first interaction is pretty interesting. Giving the system -$1000 is completely ridiculous; there is no way to borrow a negative amount. VAGABOND came to this conclusion too, and let us know. Let's ask something else...
Next we gave it something else that was completely insane. Nobody in the world asks to borrow $0, and VAGABOND was able to figure that out too. This is pretty clever. Let's try something a little more outrageous...
Unless you have a circle of very rich friends, this is definitely not a rational amount to ask VAGABOND, or your friends, for. Based on the data it knows about, none of the friends have that much cashflow, so it rationally concluded that the amount was bogus. Let's get serious and ask for a rational amount like $500, and see what it says...
BANG!!! Finally, after searching through the data for all 1000 friends, we have a match! Applying the rules, the system concluded we should ask Annamarie, because she has a large sum of cash and she is willing to lend! This is pretty cool. The system came to a reasonable solution on its own based on known data. There was no need for me to tell it exactly who to choose, and that's the point.
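The three rejections in the interactions above can be modeled as simple guard clauses that run before any searching happens. This is a hypothetical Python sketch of that behaviour, not VAGABOND's actual Elixir code, and the messages are my own wording.

```python
def validate_request(friends, amount):
    """Reject requests no rational borrower would make,
    mirroring the three nonsense cases shown above."""
    if amount < 0:
        return "There is no way to borrow a negative amount."
    if amount == 0:
        return "Nobody asks to borrow $0."
    if not any(amount <= f["cash"] for f in friends):
        return "No friend has that much cashflow."
    return None  # plausible request: worth searching for a match

sample = [{"name": "Annamarie", "cash": 1200, "willing": "YES"}]
print(validate_request(sample, -1000))
print(validate_request(sample, 500))
```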
Soon, system design and interaction will look more like this. The user will make a request, the system will analyze it against large amounts of data, and it will return a precise response instead of a pile of data the user has to look through manually. This is the essence of true automation: a system that knows how to reason while following well-defined rules, with little to no intervention from users.