SubmitX is an all-in-one platform for machine learning, intelligent automation, inference and production.

Capture domain expertise, train it with raw data and publish it in the SubmitX Collaboration Platform.

Use simple SubmitX microservices to consume expertise shared by domain experts.

Create new features, add a convoluter node, edit the kernel, add a new hidden layer or remove an activation unit with the SubmitX Visual Model Editor.

Upload raw file

Click on SandBox -> Upload Raw File in the menu-bar.

Choose a CSV (comma-separated values) file and press Open. Once the raw file is successfully uploaded, it will be visible under “rawfiles” in the “Sandbox” tree in the left navigation panel.

Open an empty model

Click on the New button in the menu-bar.

An empty model named “Untitled” will be created. If you are not interested in creating new features from your raw data, press the Auto Create button in the menu-bar; otherwise, to create new features, go here.


Auto-Create model

Click on the Auto Create button in the menu-bar.

Select the same raw CSV file (as uploaded to the sandbox in the previous step) from which the model is to be created. If the raw data is time-series data, choose time series in the Column Significance dialog and select the predictor, index and time column(s). For cross-section data, choose the predictor column only. The IDE guesses the column type as either Categorical or Continuous. To change it, double-click on the node and select the appropriate NEURONSTATETYPE.
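For illustration only (the column names and values below are hypothetical, not a SubmitX requirement), a time-series raw file might look like this, with date as the time column, store_id as the index column and sales as the predictor column:

```
date,store_id,promo,price,sales
2024-01-01,S001,1,9.99,120
2024-01-02,S001,0,9.99,95
2024-01-01,S002,1,4.50,210
2024-01-02,S002,0,4.50,180
```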

Raw data is represented as a sensory neuron and a predictor is represented as an action neuron. Raw data must be converted to a feature, or interneuron, before it can be connected to an action neuron; it can be transformed into a feature using the feature engineering nodes.

Save the model

Click on File -> Save in the menu-bar

Before saving the model, update the RAWFILE field of the action neuron with the name of the raw CSV file. The model is saved in the sandbox with the extension .brn, and a unique identifier called BRNID is assigned to the model. The BRNID of the model is visible in the bottom panel of the left navigation bar.

Learn using Autopilot

Click on the Tune button in the menu-bar.

Select AUTOPILOT as the Supervised Learning Algorithm and press Learn.

Note: You can choose an algorithm other than “Autopilot” from the drop-down menu. In that case, you have to enter the values of the chosen algorithm's hyper-parameters manually in the respective algorithm tab.

Autopilot is not an algorithm by itself; it helps choose the best algorithm and also advises on the values of that algorithm's hyperparameters.

View Log files

Double-click on the log file in the left navigation bar.

The log file will be downloaded into the <INSTALL_DIR>/downloads directory.

The file naming convention is log_<BRNID>_<Task ID>_<YYYY>-<MM>-<DD>-<hh>-<mm>-<ss>.log
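For example, a log for a hypothetical model with BRNID 1042, produced by task 7 on 15 March 2024 at 10:30:45, would be named log_1042_7_2024-03-15-10-30-45.log (all values here are illustrative only).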

Deploy/Publish model

Click on the Tune button in the menu-bar.

Select AUTOPILOT as the Supervised Learning Algorithm and press Deploy.

Note: You can choose an algorithm other than “Autopilot” from the drop-down menu. In that case, you have to enter the values of the chosen algorithm's hyper-parameters manually in the respective algorithm tab.

The difference between Deploy and Learn is that Deploy updates the model memory with the learned parameters, whereas Learn does not.

After a model is deployed and the relevant access permissions are granted, it can be used by any external system for prediction and forecast through RESTful APIs.

Verify Predict (Postman)

API

celeriacmldevops3.ap-south-1.elasticbeanstalk.com/knowledge-hotline/knowledge/predictCategorical/{brnid}/{viewid}

brnid is the BRNID of the model and viewid is the ID of the red node, i.e. the predictor node. Hover your mouse over the red node and you'll find the ID in the tooltip.
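Outside Postman, the same call can be sketched with Python's requests library. Only the URL path comes from the documentation above; the HTTP method, URL scheme, JSON payload shape and Authorization header below are assumptions to verify against your Postman collection.

```python
import requests

BASE_URL = "http://celeriacmldevops3.ap-south-1.elasticbeanstalk.com"  # scheme assumed
BRNID = "1042"   # hypothetical BRNID of the deployed model
VIEWID = "17"    # hypothetical ID of the red (predictor) node, read from its tooltip

# Assumed payload: a map of raw-column (sensory neuron) names to values.
payload = {"promo": 1, "price": 9.99, "store_id": "S001"}

resp = requests.post(   # method assumed; the endpoint may instead expect GET
    f"{BASE_URL}/knowledge-hotline/knowledge/predictCategorical/{BRNID}/{VIEWID}",
    json=payload,
    headers={"Authorization": "Bearer <access-token>"},  # access-token based login
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```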

Cloud Native

Hosted on a public cloud. 100% RESTful microservices for machine learning jobs.

Collaborative

Work in a team and display your achievements. Share machine-learning-lifecycle-specific permissions with others.

Secure

Access-token-based login.

Auto-Modelling

A library of inbuilt linear and logistic, tree-based, perceptron-based and time-series models. Based on the input data, SubmitX applies the relevant model(s) and invokes “auto-tune” until the best result is achieved.

Auto-Tune

Uses Bayesian optimization to find the set of hyperparameters with the highest probability of yielding the best model diagnostic.
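SubmitX's internal implementation is not exposed, but the idea behind Bayesian hyperparameter optimization can be sketched with the open-source scikit-optimize library (an illustration only, not the platform's code): a surrogate model of the objective proposes each next hyperparameter set to try, balancing exploration against exploitation.

```python
from skopt import gp_minimize
from skopt.space import Integer, Real

def objective(params):
    learning_rate, max_depth = params
    # Toy stand-in for "train the model and return a validation error to minimise";
    # in practice this would run a real training job.
    return (learning_rate - 0.01) ** 2 + ((max_depth - 5) ** 2) * 1e-4

search_space = [
    Real(1e-4, 1e-1, prior="log-uniform", name="learning_rate"),  # hypothetical hyperparameters
    Integer(2, 10, name="max_depth"),
]

# A Gaussian-process surrogate picks each next trial point.
result = gp_minimize(objective, search_space, n_calls=30, random_state=0)
print("best hyperparameters:", result.x, "best diagnostic:", result.fun)
```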

Auto-Prepare

Removes redundant data points, imputes missing values and performs numerous basic transformation operations during data loading, such as cleansing numeric fields and handling date formats.

Graphical Feature Engineering

A drag-and-drop tool to create complex data-enrichment pipelines using prebuilt feature engineering nodes (FENs). The tool generates a portable rule file.

Distributed Architecture

The “SubmitX Learning Server” collects aggregated (compressed) data from multiple “SubmitX Rule Engine” worker processes distributed across a cluster before starting the learning process.

Hot Deployable

Instant access to prediction and forecast from external applications (native apps, web apps, etc.) as soon as the model is deployed.

Featured Insights