What is this service?

So what exactly is this thing we are talking about here?

In one sentence: it is a machine learning service in the cloud that lets you predict almost anything without learning data science.

In two paragraphs: the service helps no-coders and automation developers cover larger parts of their workflows with bots. Often a bot stops when a human decision is needed, and this is exactly where the service steps in. Using the historical data your process has accumulated over time, it makes predictions and decisions that help the bot run autonomously, or help the human make the right call quicker.

By decisions we mean things like "which team should solve this customer ticket" or "should we use an automation to reply to this tweet". The machine learning cloud can be used from a range of no-code and automation tools such as Airtable, Integromat/Make, Zapier, Robocorp, UiPath, Automation Anywhere, and Blue Prism. It works without a single line of code, but software developers can harness its full power through the API and Python SDK.
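The exact query interface is not described in this text, so the table name, field names, and payload shape below are illustrative assumptions only. A "which team should solve this customer ticket" prediction query might look something like this:

```python
import json

# Hypothetical payload -- the real service's schema and endpoint will differ.
query = {
    "from": "tickets",                         # table of historical tickets
    "where": {"subject": "Invoice is wrong"},  # facts known about the new case
    "predict": "assigned_team",                # the field we want predicted
}

# This is the payload a no-code tool or the Python SDK would send
# to the service's prediction endpoint on your behalf.
print(json.dumps(query, indent=2))
```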

As described by our lead scientist Christoffer

[He has been so busy with building the platform that this text is still missing. Sorry!]

The difference

The service covers the entire machine learning workflow in one cloud-hosted solution, aimed at no- and low-coders. For each workflow step below, the traditional ML approach is contrasted with the service's approach.
Feature engineering
- Traditional ML approach: The user is expected to perform several feature engineering steps (imputation, feature scaling, one-hot encoding, grouping, standardization, ...) to produce training data suitable for most machine learning algorithms.
- This service: Upload data from an existing data source such as Airtable or a CSV file. No other steps needed.
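To make the "traditional" side concrete, here is a minimal pure-Python sketch of the manual steps named above (imputation, standardization, one-hot encoding) on a toy three-row dataset; real pipelines would typically use a library such as scikit-learn, but the mechanics are the same:

```python
from statistics import mean, pstdev

# Toy dataset: one numeric feature with a missing value, one categorical feature.
rows = [
    {"age": 34, "team": "billing"},
    {"age": None, "team": "support"},
    {"age": 28, "team": "billing"},
]

# 1. Imputation: replace missing ages with the mean of the known ones.
known = [r["age"] for r in rows if r["age"] is not None]
fill = mean(known)
ages = [r["age"] if r["age"] is not None else fill for r in rows]

# 2. Standardization: shift and scale to zero mean, unit variance.
mu, sigma = mean(ages), pstdev(ages)
scaled = [(a - mu) / sigma for a in ages]

# 3. One-hot encoding of the categorical feature.
categories = sorted({r["team"] for r in rows})
onehot = [[1 if r["team"] == c else 0 for c in categories] for r in rows]

# Final numeric feature matrix a typical ML algorithm expects.
features = [[s] + h for s, h in zip(scaled, onehot)]
print(features)
```

Every one of these steps is a chance to get something subtly wrong, which is why skipping them entirely is a selling point.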
Model construction
- Traditional ML approach: The user needs to choose a fixed prediction target, and then the most suitable algorithm and its parameters, typically requiring data science knowledge to get right.
- This service: Works fundamentally differently: predictions are generated in real time from queries, and users do not need to know the details of the science behind them.
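A toy way to picture the difference (the query shapes and field names are assumptions, as above): with a fixed-target model you would train and deploy one model per question, while query-based prediction lets the same data answer different questions by changing one field:

```python
# Hypothetical query shapes -- table and field names are assumptions.
base = {"from": "tickets", "where": {"subject": "Invoice is wrong"}}

# Same data, two different prediction targets, no separately trained models:
predict_team = {**base, "predict": "assigned_team"}
predict_automation = {**base, "predict": "can_auto_reply"}
```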
Deployment and hosting
- Traditional ML approach: After the model is trained, the user wraps it in an API for production use and deploys it on a server, taking care of scalability and performance as well as maintenance tasks.
- This service: There are no "models", so this step is made completely irrelevant. Once your data is in, you already have an API for querying predictions, and the scaling is taken care of for you.
Retraining with new data
- Traditional ML approach: Every new datapoint means the model needs to be retrained and deployed again to production. This pipeline needs to be maintained, managed, and monitored in order to keep accuracy high and everything running fresh.
- This service: Every new datapoint automatically contributes to the next prediction. This happens without any user action.
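As a toy illustration of why no retraining step exists (this is an illustrative mechanism, not the service's actual algorithm): if predictions are computed from the stored rows at query time, a newly added row changes the very next answer with no retrain/redeploy cycle:

```python
from collections import defaultdict

data = []  # the "uploaded" table of historical rows

def add(row):
    data.append(row)  # no retraining step, no redeployment

def predict_team(word):
    # Majority vote over matching historical rows, computed at query time.
    counts = defaultdict(int)
    for row in data:
        if word in row["subject"]:
            counts[row["team"]] += 1
    return max(counts, key=counts.get) if counts else None

add({"subject": "invoice is wrong", "team": "billing"})
add({"subject": "cannot log in", "team": "support"})
print(predict_team("invoice"))  # -> billing

add({"subject": "invoice overdue", "team": "finance"})
add({"subject": "invoice duplicate", "team": "finance"})
print(predict_team("invoice"))  # the new rows immediately change the answer -> finance
```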