Artificial Intelligence is hitting its peak in the hype cycle, and the marketing industry is no exception.

Our Chief Product Officer, Chris Francia, shares his insights on AI and how it drives our disruptive innovation in digital advertising.

You would be hard-pressed to find a company with roots in the digital space that does not mention AI as part of its current or future growth plans. Yet, as with any new technology that bursts onto the marketing scene, there is a fair amount of misinformation about it.

For starters, AI will neither be the doom of our society nor will it magically solve all of our problems. Between those two extremes, however, AI can assist and improve current workflows and technologies. This is especially true in scientific and medical applications, where AI can draw conclusions and inferences from data better than humans and, more importantly, faster. There is a wonderful article from the BBC in 2017 detailing some great use cases of AI. It's well worth the read.

However, despite all the great things AI is doing, it has been limited by one crucial flaw: speed.

When artificial intelligence is used to accomplish a task that requires heavy cognitive work, it takes time to complete its analysis and respond. For some use cases, waiting a couple of seconds for a response isn't a problem, but for others, like high-frequency trading, a couple of seconds is a lifetime. To get a handle on why this occurs and why it's an issue, we have to understand the basics of AI.

What is AI?

AI may be defined as the full process by which a machine learns and attempts to solve a problem. To accomplish this, AI must have data to learn from; this is considered the “Training Data” (TD).

Training can be "supervised" or "unsupervised" depending on the situation. The TD may be stock reports or images of cats; in the end, it is all data to the machine. Once your TD is properly prepared, the machine can begin to look at the data and learn from it. This is called "Training": the machine analyzes the data to discover relationships between the values provided and accomplish a task.

If my task were to find which hour of the day is the worst time to sell stocks, the machine would analyze TD containing stock trades, prices, and times. It would then look for patterns or relationships in that data that fit the assigned goal. The machine continues to train until it can produce accurate results on the data. Once the results are deemed accurate, training is considered complete, and a "Model" of the TD is generated.
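The hour-of-day example above can be sketched as a toy "training" pass in Python. This is purely illustrative, not any production system: the trade records and the `worst_hour_to_sell` helper are invented for this example, and real training involves far more than averaging.

```python
from collections import defaultdict

# Hypothetical Training Data (TD): (hour_of_day, sale_price) records.
training_data = [
    (9, 101.2), (9, 100.8), (10, 99.5), (10, 99.1),
    (11, 100.3), (14, 98.7), (14, 98.9), (15, 99.8),
]

def worst_hour_to_sell(trades):
    """Average the sale price per hour; the lowest average marks
    the worst hour to sell in this toy data set."""
    totals = defaultdict(lambda: [0.0, 0])
    for hour, price in trades:
        totals[hour][0] += price
        totals[hour][1] += 1
    averages = {hour: total / count for hour, (total, count) in totals.items()}
    return min(averages, key=averages.get)

print(worst_hour_to_sell(training_data))  # → 14 (lowest average price)
```

The "pattern" discovered here (hour 14 has the lowest average price) plays the role of the Model: a compact summary of the TD that can answer future questions.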


In layman's terms, the Model represents a very complex graph, and it becomes the basis for the AI's problem solving going forward. The AI does not know anything outside its Model, which is why one AI might be good at detecting cats in a photo but terrible at detecting dogs. With the Model ready, the machine loads it into a "Prediction Engine" (PE), which acts much like a web server: it takes in requests and spits out responses.

The PE is what other systems or humans interact with; it is the face of the AI, be it Alexa, Siri, or Watson. The TD, the Model, and the PE together constitute an AI system. Some AI systems have developed ways to retrain automatically on new data, so they get more accurate as they continue to make predictions. The more you ask Siri, the smarter she becomes, because she is able to train on more data, but the principle still holds. We won't dive into all the particulars of how to build the Model, but we do recommend reading "The history and current methodologies of AI detailed".
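The Model-plus-PE split described above can be sketched in a few lines of Python. The `PredictionEngine` class and the lookup-table "model" below are hypothetical simplifications (a real Model is far more than a dictionary), but the request-in, response-out shape is the same.

```python
import json

class PredictionEngine:
    """Toy stand-in for a Prediction Engine (PE): load a Model once,
    then answer requests much like a web server answers HTTP calls."""

    def __init__(self, model):
        self.model = model  # the "complex graph", reduced here to a lookup table

    def predict(self, request_json):
        request = json.loads(request_json)
        hour = request["hour"]
        # The PE knows nothing outside its Model: unseen hours return None.
        return json.dumps({"hour": hour, "expected_price": self.model.get(hour)})

# A toy "Model": per-hour average prices learned during training.
engine = PredictionEngine({9: 101.0, 10: 99.3, 14: 98.8})
print(engine.predict('{"hour": 14}'))  # → {"hour": 14, "expected_price": 98.8}
```

Note that asking this PE about an hour outside its Model returns nothing useful, mirroring the cat-versus-dog limitation described above.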


Fast, But Not That Fast

What has been described above takes time to accomplish, and unlike a web server's simple static reply, it is a complex analysis on every request, so responses are significantly slower. It also doesn't help that most major AI libraries are written in Python; while it is a great language, one of its most consistent drawbacks is its speed. I know that by making that statement we have touched on a fierce debate about Python optimizations, but that is a debate for another article, like [this one].

Returning to the response time, or latency, of a prediction: in a run-of-the-mill AI setup on a single machine, you can optimize your system to return predictions within 1-2 seconds, depending on the complexity of your task. However, the more predictions you want to run, the slower it becomes. If I needed to handle 10 predictions per second, then using the above range it would take 10-20 seconds to complete all of those predictions. Because each prediction takes longer than 1 second, you end up with a compounding effect: the predictions overlap, causing additional strain on your resources. This is a basic problem when scaling any type of system, and for AI it is solved by growing the resources behind the PE to accommodate the strain.
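The compounding effect described above is simple arithmetic. The sketch below, with an invented `backlog_after` helper, shows how a backlog builds when requests arrive faster than one engine can serve them, and how adding workers behind the PE clears it (assuming ideal, linear scaling).

```python
def backlog_after(seconds, arrivals_per_sec, service_time_sec, workers=1):
    """How many predictions are still waiting after `seconds`, when each
    worker needs `service_time_sec` to finish one prediction."""
    capacity_per_sec = workers / service_time_sec
    served = min(arrivals_per_sec, capacity_per_sec) * seconds
    return arrivals_per_sec * seconds - served

# 10 requests/sec against a 1.5 s prediction time on one machine:
print(backlog_after(10, 10, 1.5))              # → ~93 predictions still queued
# Scale the PE out to 15 workers and the backlog disappears:
print(backlog_after(10, 10, 1.5, workers=15))  # → 0
```

Real queues are burstier than this steady-state picture, but the takeaway holds: whenever arrival rate exceeds service capacity, the backlog grows without bound.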

Companies like Amazon have built this auto-scaling into their AI offerings to simplify the development process. However, even with greater resources, the prediction time can only be lowered so much. Some may be able to lower certain predictions into the mid-to-high milliseconds, but if you are talking about Real-Time Bidding in advertising or stock trading, even tens of milliseconds is devastating. The current solutions for those industries are to scale back the complexity of the AI system, which in turn affects its performance, or to use the AI outside of a real-time environment, restricting the AI's ability to make a meaningful impact in that industry.

New Approaches Emerge

At Kubient, we have spent the last two years solving this precise problem in real-time digital advertising while addressing ad fraud. By solving the latency issue, we have been able to crack the code with our disruptive innovation: our AI can prevent ad fraud before the bid rather than merely identify it afterward. It also resolves the significant programmatic-versus-manual friction in the digital out-of-home (DOOH) channel, enabling that channel to be transacted through OpenRTB.

“Artificial Intelligence, and more specifically Deep Learning, opened up a whole new toolbox to solve many of the problems we saw afflicting the Digital Advertising Space. However, because of the latency requirements of Real-Time Bidding, many of those tools sat unused,” says Christopher Francia, Chief Product Officer at Kubient.

“We talked to a lot of different advertisers and publishers in this space and had a strong understanding of what the problems were and what was currently out there to address them. But the more we analyzed it the faster we came to the realization that Deep Learning was the only sustainable way to solve them.”


The Kubient team put all its resources to work developing a new Deep Learning AI system from the ground up, specifically designed to overcome the latency hurdle without sacrificing accuracy.


“It was a tall order, and I think for a few months there we lost a lot of sleep wondering if we were throwing everything at an impossible problem.”


However, Kubient's risk paid off, and in October 2018 Kubient released its AI system, K.A.I. (Kubient Artificial Intelligence).

“It was probably the thousandth test we had done that week. We were seeing results around 100-200 milliseconds, which was on par with other systems but nowhere near where we needed to be. After a few unconventional ideas, we ran more tests and the results really surprised us.”

Kubient’s initial test of the K.A.I. system was focused on a price prediction and yielded predictions in less than 2 milliseconds. Their target was 10 milliseconds.


“When we saw that number, I think we almost passed out from joy. It is very gratifying when you exceed your own high expectations. We chose a fairly straightforward task but felt encouraged that a similar approach would yield the same results.”

Those results were in fact echoed for both the Fraud Prevention and Optimization systems.

“Fraud is probably the number one topic talked about in the industry, and it is hard to determine an exact figure on how much fraud is actually occurring on a daily basis. Normally you get a range from as low as 10% to as high as 30% so it just shows you how much of an unknown it is.”


According to Juniper Research, businesses will lose upwards of $42 billion to fraud in online advertising this year alone, making it one of the most lucrative and fastest-growing illegal activities on the planet.

“I think the biggest problem is the fragmentation of fraud protection. Each company focuses on different aspects of fraud, whether it’s bot detection or viewability. It becomes very expensive for companies to protect themselves and there is still doubt as to what fraud exists that is not being caught. I think that is where Kai will really make an impact.”


With the ability to utilize this new Deep Learning system, Kubient has been able to detect patterns and relationships between different advertising requests and determine with high accuracy what is fraudulent and what is not.


“How we do it is a closely kept secret; we didn’t spend a year in development just to provide everyone else the blueprint to our success. But we can say with confidence that it works, and it is getting smarter every day.”

Not only did Kubient solve the latency problem of AI, they also found ways to reduce their costs. The K.A.I. system is incredibly lightweight; it can even be powered by a smartphone and achieve similar results. This has allowed Kubient to scale the system to handle the throughput Real-Time Bidding generates.

Even though Kubient is a digital advertising company, the K.A.I. system may have an impact reaching far beyond that space.

“I think this system, or at least the underlying principles of it, could be used in many applications outside our space. We have had people ask us about potential military uses. Getting the speed to where it is allows any system that utilizes it to do a lot more with the same resources. One of the great things about Kai is it can run parallel predictions very easily and cheaply, in terms of system resources.”


While still young, K.A.I. is already proving its merits, showing that just because the status quo is good enough doesn't mean it shouldn't be better. AI itself has a long way to go before we reach the kind of AI we see in movies and television, but creating a faster AI is certainly a step in the right direction.