Technology posts

Introduction to Sentiment Analysis Algorithms

Sentiment analysis is the use of natural language processing, statistics, and text analysis to extract and identify the sentiment of text, classifying it into positive, negative, or neutral categories. We often see sentiment analysis used to arrive at a binary decision: somebody is either for or against something, users like or dislike something, or a product is good or bad.
Sentiment analysis is also called opinion mining, since it includes identifying consumer attitudes, emotions, and opinions toward a company’s product, brand, or service.

Sentiment Analysis Use Cases

Sentiment analysis is frequently applied to reviews and social media to help marketing and customer service teams identify the feelings of consumers. In media such as product reviews, sentiment analysis can be used to uncover whether consumers are satisfied or dissatisfied with a product. Likewise, a company could use sentiment analysis to measure the impact of a new product or ad campaign, or to gauge consumers’ response to recent company news on social media.

A customer service agent at a company could use sentiment analysis to automatically sort incoming user email into “urgent” or “not urgent” buckets based on the sentiment of the email, proactively identifying frustrated users. The agent could then direct their time toward the users with the most urgent needs first.

Sentiment analysis is often used in business intelligence to understand the subjective reasons why consumers are or are not responding to something (e.g. Why are consumers buying a product? What do they think of the user experience? Did customer service support meet their expectations?). Sentiment analysis can also be used in the areas of political science, sociology, and psychology to analyze trends, ideological bias, opinions, gauge reactions, etc.

Challenges of Sentiment Analysis

People express opinions in complex ways, which makes understanding human opinion a difficult problem to solve. Rhetorical devices like sarcasm, irony, and implied meaning can mislead sentiment analysis, which is why concise and focused opinions like product, book, movie, and music reviews are easier to analyze.

Sentiment Analysis Algorithms

Veltrod provides several powerful sentiment analysis algorithms to developers. Implementing sentiment analysis in your apps is as simple as calling our REST API. There are no servers to set up, and no settings to configure. Sentiment analysis can be used to quickly analyze the text of research papers, news articles, social media posts like tweets, and more.

Social Sentiment Analysis is an algorithm that is tuned to analyze the sentiment of social media content, like tweets and status updates. The algorithm takes a string and returns sentiment ratings for the “positive,” “negative,” and “neutral” categories. In addition, this algorithm provides a compound result, which is the general, overall sentiment of the string.
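The REST endpoint itself isn’t shown here, but the scoring idea can be sketched locally. This is a deliberately simplified, self-contained stand-in: the word lists are hypothetical, and real sentiment lexicons are far richer.

```python
# Simplified sketch of a social-sentiment scorer. The POSITIVE/NEGATIVE
# word lists are illustrative stand-ins, not a real lexicon.
POSITIVE = {"good", "great", "love", "awesome", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def social_sentiment(text):
    """Return positive/negative/neutral token ratios plus a compound score."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    neu = len(tokens) - pos - neg
    total = max(len(tokens), 1)
    compound = (pos - neg) / total  # overall sentiment in [-1, 1]
    return {"positive": pos / total, "negative": neg / total,
            "neutral": neu / total, "compound": compound}

print(social_sentiment("I love this awesome phone"))  # compound > 0
print(social_sentiment("This update is terrible"))    # compound < 0
```

The four-part result mirrors the shape of the output described above: three per-category ratios plus a single compound number summarizing the whole string.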

Veltrod specializes in providing deep learning solutions across domains. Write to us for a free consultation and quotes for business needs.


Adding multilingual support to any algorithm: pre-translation in NLP

We often get asked whether we’re planning on adding any non-English NLP algorithms. As much as we would love to train NLP models on other languages, there aren’t many usable training datasets in these languages. And, due to the linguistic structure of these languages, training with pre-existing approaches doesn’t always give the best results.

Until better training sets can be generated, one passable solution is to translate the text to English before sending it to the algorithm.

In order to make it easier to integrate language translation within your algorithms, we’ve added Google Translate as a wrapper in our marketplace. We’re going to look at the pros and cons of pre-translation in NLP algorithms.
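The pre-translation pattern can be sketched as a two-step pipeline. The `translate_to_english` function below is a hypothetical stub (a toy lookup table) standing in for a real call to a translation service such as Google Translate, and the sentiment scorer is equally simplified.

```python
# Pre-translation sketch: translate first, then run the English-only NLP step.
PHRASEBOOK = {"c'est magnifique": "it is magnificent"}  # toy lookup table

def translate_to_english(text):
    # Stand-in for a real translation API call.
    return PHRASEBOOK.get(text.lower(), text)

POSITIVE = {"magnificent", "great", "good"}

def sentiment_score(english_text):
    tokens = english_text.lower().split()
    return sum(t in POSITIVE for t in tokens) / max(len(tokens), 1)

def analyze_any_language(text):
    # The output (a number) is independent of the input language,
    # which is what makes pre-translation viable here.
    return sentiment_score(translate_to_english(text))

print(analyze_any_language("C'est magnifique"))  # positive score
```

Note that the French input and its English translation produce the same score, which previews the “output independent of input” requirement discussed below.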

Use Case 1: Social Media Image Recommender

With pre-translation in place, the algorithm works on all of the human languages supported by Google Translate. This is awesome, because most NLP tools don’t work on other languages, but here we were able to support a lot of them just by adding a few simple lines of code to pre-translate the text.

Use Case 2: Sentiment Analysis

Like most other NLP algorithms, Sentiment Analysis works well on English because the majority of NLP research has been done on the English language. This is especially true for the Sentiment Analysis algorithm, because it relies heavily on a model that was trained on a gold-standard dataset.

For this popular algorithm, we’ve added the option of specifying the source language, or letting the algorithm automatically detect it. This allows the algorithm to work on a translated version of the text, which might not yield perfect results, but still works fairly well considering that we merely pre-translated the text.

While pre-translation works for many algorithms, there’s one important requirement: the output must be independent of the input. What does that mean?

Sentiment Analysis is an example that follows this rule. Regardless of the input, the output is a score within a fixed numeric range, and the algorithm doesn’t return any part of the input (such as extracted words or phrases). Pre-translating the input might not give perfect results, but in a world that doesn’t have good NLP tools for other languages it works pretty well.

This would also work for Named Entity Recognition, but not quite as well: NER returns parts of the original text back to the user. We could pre-translate the text, detect the entities, and then map each entity back to its original word with its corresponding entity tag. The double translation may cause information loss, and might return a completely different word in the output. This is not ideal, and is why pre-translation is not recommended for such semi-dependent NLP algorithms when consistent outputs matter.



Introduction to Natural Language Processing (NLP)

What is Natural Language Processing?

NLP is a way for computers to analyze, understand, and derive meaning from human language in a smart and useful way. By utilizing NLP, developers can organize and structure knowledge to perform tasks such as automatic summarization, translation, named entity recognition, relationship extraction, sentiment analysis, speech recognition, and topic segmentation.

What Can Developers Use NLP Algorithms For?

NLP algorithms are typically based on machine learning algorithms. Instead of hand-coding large sets of rules, NLP can rely on machine learning to automatically learn these rules by analyzing a set of examples (i.e. a large corpus, like a book, down to a collection of sentences) and making statistical inferences. In general, the more data analyzed, the more accurate the model will be.

  • Summarize blocks of text using Summarizer to extract the most important and central ideas while ignoring irrelevant information. 
  • Create a chat bot using Parsey McParseface, a language parsing deep learning model made by Google that uses Part-of-Speech tagging.
  • Automatically generate keyword tags from content using AutoTag, which leverages LDA, a technique that discovers topics contained within a body of text.
  • Identify the type of entity extracted, such as it being a person, place, or organization using Named Entity Recognition.
  • Use Sentiment Analysis to identify the sentiment of a string of text, from very negative to neutral to very positive.
  • Reduce words to their root, or stem, using PorterStemmer, or break up text into tokens using Tokenizer.
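The learning-from-examples idea above can be sketched with a tiny bag-of-words classifier. The corpus and labels below are invented for illustration, and scikit-learn is assumed to be available; real training corpora are, of course, much larger.

```python
# Learning classification "rules" from examples instead of hand-coding them:
# a bag-of-words model plus Naive Bayes, trained on a toy labeled corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["great product, love it", "works great", "terrible, broke fast",
         "awful support", "love the design", "bad battery, bad screen"]
labels = ["pos", "pos", "neg", "neg", "pos", "neg"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)      # statistical features from raw text
model = MultinomialNB().fit(X, labels)   # inference from examples

print(model.predict(vectorizer.transform(["love this great screen"])))  # → ['pos']
```

The model has never seen an explicit rule like “love implies positive”; it inferred that association statistically from the examples, which is exactly the point made above.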

Open Source NLP Libraries

These libraries provide the algorithmic building blocks of NLP in real-world applications. Algorithmia provides a free API endpoint for many of these algorithms, without ever having to set up or provision servers and infrastructure.

  • Apache OpenNLP: a machine learning toolkit that provides tokenizers, sentence segmentation, part-of-speech tagging, named entity extraction, chunking, parsing, coreference resolution, and more.
  • Natural Language Toolkit (NLTK): a Python library that provides modules for processing text, classifying, tokenizing, stemming, tagging, parsing, and more.
  • Stanford NLP: a suite of NLP tools that provides part-of-speech tagging, named entity recognition, coreference resolution, sentiment analysis, and more.
  • MALLET: a Java package that provides Latent Dirichlet Allocation, document classification, clustering, topic modeling, information extraction, and more.
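As a quick taste of one of these libraries, here is NLTK’s Porter stemmer paired with a regular-expression tokenizer (assuming the `nltk` package is installed; neither component requires extra data downloads).

```python
# Tokenize raw text into words, then reduce each word to its stem.
from nltk.stem import PorterStemmer
from nltk.tokenize import RegexpTokenizer

tokenizer = RegexpTokenizer(r"\w+")  # split on word characters
stemmer = PorterStemmer()

tokens = tokenizer.tokenize("The runners were running quickly.")
stems = [stemmer.stem(t.lower()) for t in tokens]
print(stems)  # → ['the', 'runner', 'were', 'run', 'quickli']
```

Stems like “quickli” are not dictionary words; the Porter algorithm only guarantees that related word forms collapse to the same root, which is what most indexing and matching tasks need.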

A Few NLP Examples

  • Use Summarizer to automatically summarize a block of text, extracting topic sentences and ignoring the rest.
  • Generate keyword topic tags from a document using LDA (Latent Dirichlet Allocation), which determines the most relevant words from a document. This algorithm is at the heart of the Auto-Tag and Auto-Tag URL micro services.
  • Sentiment Analysis, based on StanfordNLP, can be used to identify the feeling, opinion, or belief of a statement, from very negative, to neutral, to very positive. Often, developers will use an algorithm to identify the sentiment of a term in a sentence, or use sentiment analysis to analyze social media.
  • NLP algorithms can be extremely helpful for web developers, providing them with the turnkey tools needed to create advanced applications, and prototypes.



What Are the Prospects for Deep Learning?

The data science community is constantly on the hunt for the next blockbuster multi-use algorithm. Ease of use and interpretability have made logistic regression and decision trees analytic staples. But their accuracy and classification stability leave something to be desired. So the industry keeps searching for an algorithm that can decipher key patterns and signals in data.

A long line of fad techniques has come and gone. Deep learning is the latest darling of the data science set. But how likely is this latest algorithm to stick around for the long haul?

Let’s take a stroll through a brief history of machine learning development to shed some light on deep learning.

What Does History Tell Us About Deep Learning?                                     

What does all this history tell us about the prospects for deep learning? If you pull a Google Trends report, you will see that search traffic for the term “deep learning” has grown exponentially over the past five years, compared with stable interest in the other algorithms.

Deep learning is a type of machine learning that uses a many-layered neural network to produce highly accurate models. Deep learning has taken off for two reasons. First, organizations of all sizes finally have access to more and bigger data, including unstructured data. Second, the massive computing power necessary to train models on big data in reasonable time frames is now available at an affordable cost.

Is there a downside to deep learning? Deep networks with five or more hidden layers commonly have millions, if not billions, of parameters to estimate. One of the risks when fitting big models is overfitting the training data, which means the model can have poor predictive performance. Many machine learning techniques help prevent overfitting, such as pruning in decision trees, early stopping in neural networks, and regularization in SVMs. But the simplest and most effective method is to use more data.
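Early stopping, one of the overfitting defenses mentioned above, can be sketched with scikit-learn’s small neural-network class (the data is synthetic and the settings are illustrative, not a production recipe).

```python
# Early stopping: hold out a validation split during training and stop
# once the validation score stops improving, rather than fitting to the end.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32,), early_stopping=True,
                      validation_fraction=0.2, max_iter=500, random_state=0)
model.fit(X, y)
print(f"training accuracy: {model.score(X, y):.2f}")
```

The same flag-level idea appears in most deep learning frameworks: monitor a validation metric and halt training when it plateaus.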

Deep learning is popular for three “big” reasons: big data, big models and big computations. Big data requires big modeling, and big modeling requires big computations (such as graphics processing units [GPUs]).

Of course, these factors also create big challenges for deep learning. Big data is expensive to collect, label and store. Big models are hard to optimize. Big computations are expensive. At SAS, we’re focusing on coming up with new ideas to overcome these challenges and make deep learning more accessible to everyone.



Bring Deep Learning to iOS and Android

We’ve all read about machine learning in the headlines, but many iOS and Android developers haven’t made the leap to integrating machine learning intelligence into their applications. This is partly due to the time commitment needed to learn enough statistics to understand the math behind the models, and to determine which models are appropriate for your use case. Once a developer has this knowledge under their belt, they then have to move their trained model to production, which requires a whole other set of skills, especially when it’s a deep learning algorithm that requires a GPU environment.

Between learning the algorithms and productionizing them for mobile devices, integrating ML into an application can seem like a daunting task. But there are big benefits to adding machine learning: you can take your mobile app from a basic CRUD architecture, to much more advanced uses:

  • adding nudity detection, to automatically filter out unsafe images
  • suggesting tailored content and products to users based on individual behavior
  • classifying images for a marketing campaign in real time

Fortunately, there is an easier way. You don’t have to be an expert in machine learning to take advantage of its benefits. And if you are an expert, you can host your models for free in our scalable, serverless AI cloud.

Veltrod provides over 4,000 algorithms to developers, usable from practically any programming language or framework, including iOS and Android. We serve up AI algorithms as scalable microservices that are available through a REST API, and you don’t have to build or maintain any servers — we do all the DevOps for you.

So, if you’re a mobile developer looking to integrate machine learning into your apps, check out our newly released guides for bringing deep learning into your iOS and Android applications:

If you are an Android developer who is interested in learning how to bring deep learning algorithms into your app, check out the Car Make and Model example where we show you how to take a picture of a car and run it through the Car Make and Model Recognition algorithm.  Then take a look at the Android development guide for a complete walk-through.

For iOS developers, check out how to apply filters to your images with our iOS Integration guide where we show how to use a deep learning algorithm called Deep Filter to transform your images into a work of art.



An Introduction to Deep Learning

Deep learning is impacting everything from healthcare to transportation to manufacturing, and more. Companies are turning to deep learning to solve hard problems, like speech recognition, object recognition, and machine translation.

One of the most impressive achievements this year was AlphaGo beating the best Go player in the world. With the victory, Go joins checkers, chess, Othello, and Jeopardy as games at which machines have defeated humans.

While beating someone at a board game might not seem useful on the surface, this is a huge deal. Before the victory, Go had been written off as a candidate for competent AI, due in part to the amount of human intuition necessary to play the game. The victory makes an entire class of problems once considered intractable ripe for solving.

While it might seem like this technology is still years away, we are beginning to see commercial use. Such is the case with self-driving cars. Companies like Google, Tesla, and Uber are already testing autonomous cars on the streets.

What Is Deep Learning?

Deep Learning is a new area of Machine Learning research, which has been introduced with the objective of moving Machine Learning closer to one of its original goals: Artificial Intelligence.

Why is Deep Learning Important?

Computers have long had techniques for recognizing features inside of images. The results weren’t always great. Computer vision has been a main beneficiary of deep learning. Computer vision using deep learning now rivals humans on many image recognition tasks.

Speech recognition is another area that has felt deep learning’s impact. Spoken languages are vast and ambiguous. Baidu, one of the leading search engines in China, has developed a voice recognition system that is faster and more accurate than humans at producing text on a mobile phone, in both English and Mandarin.

What is particularly fascinating is that generalizing to the two languages didn’t require much additional design effort: “Historically, people viewed Chinese and English as two vastly different languages, and so there was a need to design very different features,” says Andrew Ng, chief scientist at Baidu. “The learning algorithms are now so general that you can just learn.”

Open Source Deep Learning Frameworks

Deep learning is made accessible by a number of open source projects. Some of the most popular technologies include, but are not limited to, Deeplearning4j, Theano, Torch, TensorFlow, and Caffe. The deciding factors on which one to use are the tech stack they target, and whether they are low-level, academic, or application-focused. Here’s an overview of each:


Deeplearning4j:

  • JVM-based
  • Distributed
  • Integrates with Hadoop and Spark

Theano:

  • Very popular in academia
  • Fairly low level
  • Interfaced with via Python and NumPy

Torch:

  • Lua-based
  • In-house versions used by Facebook and Twitter
  • Contains pretrained models



Machine Learning Tools

Tools are a big part of machine learning and choosing the right tool can be as important as working with the best algorithms.

In this post you will take a closer look at machine learning tools. Discover why they are important and the types of tools that you could choose from.

Why Use Tools

Machine learning tools make applied machine learning faster, easier and more fun.

Faster: Good tools can automate each step in the applied machine learning process. This means that the time from ideas to results is greatly shortened. The alternative is implementing each capability yourself, from scratch, which can take significantly longer than choosing an off-the-shelf tool.

Easier: You can spend your time choosing good tools instead of researching and implementing techniques yourself. The alternative is that you have to be an expert in every step of the process in order to implement it. This requires research, deeper study to understand the techniques, and a higher level of engineering to ensure everything is implemented efficiently.

Fun: There is a lower barrier for beginners to get good results. You can use the extra time to get better results or work on more projects. The alternative is that you will spend most of your time building your tools rather than on getting results.

When To Use Machine Learning Tools

Machine learning tools can save you time and help you consistently deliver good results across projects. Some examples of when you may get the most benefit from using machine learning tools include:

Getting Started: When you are just getting started, machine learning tools guide you through the process of delivering good results quickly and give you confidence to continue on with your next project.

Day-to-Day: When you need a good answer to a question quickly, machine learning tools allow you to focus on the specifics of your problem rather than on the depths of the techniques you need to use to get an answer.

Project Work: When you are working on a large project, machine learning tools can help you to prototype a solution, figure out the requirements and give you a template for the system that you may want to implement.

Machine Learning Platform

A machine learning platform provides capabilities to complete a machine learning project from beginning to end: data analysis, data preparation, modeling, and algorithm evaluation and selection.

Features of machine learning platforms are:

  • They provide capabilities required at each step in a machine learning project.
  • The interface may be graphical, command-line, or programmatic, or some combination of these.
  • They provide a loose coupling of features, requiring that you tie the pieces together for your specific project.
  • They are tailored for general purpose use and exploration rather than speed, scalability or accuracy.



Essentials of Machine Learning Algorithms

Broadly, there are 3 types of Machine Learning Algorithms:

Supervised Learning

How it works: This type of algorithm consists of a target/outcome variable (or dependent variable) that is to be predicted from a given set of predictors (independent variables). Using this set of variables, we generate a function that maps inputs to desired outputs. The training process continues until the model achieves a desired level of accuracy on the training data. Examples of supervised learning: regression, decision tree, random forest, KNN, logistic regression, etc.

Unsupervised Learning

How it works: In this type of algorithm, we do not have any target or outcome variable to predict or estimate. It is used for clustering a population into different groups, which is widely used for segmenting customers into groups for specific interventions. Examples of unsupervised learning: the Apriori algorithm, K-means.
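The K-means algorithm named above can be sketched on a toy customer-segmentation task. The customer data is invented, and scikit-learn is assumed to be available.

```python
# K-means clustering: group customers by behavior without any labels.
import numpy as np
from sklearn.cluster import KMeans

# columns: [monthly spend, store visits] -- two obvious behavior groups
customers = np.array([[10, 1], [12, 2], [11, 1],    # low spend, few visits
                      [90, 9], [95, 10], [92, 8]])  # high spend, many visits

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)  # first three customers share one label, last three the other
```

No outcome variable was supplied; the algorithm discovered the two segments purely from the structure of the data, which is the defining trait of unsupervised learning.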

Reinforcement Learning

How it works: Using this type of algorithm, the machine is trained to make specific decisions. It works this way: the machine is exposed to an environment where it trains itself continually using trial and error. The machine learns from past experience and tries to capture the best possible knowledge to make accurate business decisions. Example of reinforcement learning: the Markov Decision Process.
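To make the trial-and-error framing concrete, here is a toy Markov Decision Process solved by value iteration. The states, rewards, and transition probabilities are all invented for illustration.

```python
# Tiny MDP: state 0 = "searching", state 1 = "found deal";
# actions: 0 = "keep searching", 1 = "buy".
# transitions[s][a] = list of (probability, next_state, reward)
transitions = {
    0: {0: [(0.7, 0, 0.0), (0.3, 1, 0.0)],  # searching may find a deal
        1: [(1.0, 0, -1.0)]},               # buying too early is penalized
    1: {0: [(1.0, 1, 0.0)],
        1: [(1.0, 0, 5.0)]},                # buying the deal pays off
}
gamma = 0.9                 # discount factor for future rewards
V = {0: 0.0, 1: 0.0}        # initial value estimate for each state

for _ in range(100):        # value iteration until approximately converged
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in transitions[s].values())
         for s in transitions}

print({s: round(v, 2) for s, v in V.items()})  # state 1 is worth more than state 0
```

The loop repeatedly re-estimates how valuable each state is under the best available action, which is the core computation behind MDP-based decision making.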

List of Common Machine Learning Algorithms

Linear Regression

It is used to estimate real values (cost of houses, number of calls, total sales, etc.) based on continuous variable(s). Here, we establish a relationship between independent and dependent variables by fitting a best-fit line. This best-fit line is known as the regression line and is represented by the linear equation Y = a*X + b.
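Fitting the line Y = a*X + b can be sketched with scikit-learn (assumed available). The house-price data below is made up and exactly linear, so the fitted coefficients come out exact.

```python
# Linear regression: recover the slope a and intercept b of Y = a*X + b.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[50], [80], [110], [140]])  # house size (sq m)
y = np.array([150, 240, 330, 420])        # price: exactly 3 * size

model = LinearRegression().fit(X, y)
print(model.coef_[0], model.intercept_)   # a ≈ 3.0, b ≈ 0.0
print(model.predict([[100]]))             # ≈ [300.]
```

With real, noisy data the line is the least-squares best fit rather than an exact solution, but the API is identical.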

Logistic Regression

Don’t get confused by its name! It is a classification algorithm, not a regression algorithm. It is used to estimate discrete values based on a given set of independent variable(s). In simple words, it predicts the probability of occurrence of an event by fitting data to a logit function. Hence, it is also known as logit regression. Since it predicts a probability, its output values lie between 0 and 1 (as expected).
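The probability-between-0-and-1 behavior can be sketched with scikit-learn; the hours-studied/exam-passed data is invented for illustration.

```python
# Logistic regression: predict the probability of passing from hours studied.
import numpy as np
from sklearn.linear_model import LogisticRegression

hours = np.array([[1], [2], [3], [8], [9], [10]])  # hours studied
passed = np.array([0, 0, 0, 1, 1, 1])              # 0 = fail, 1 = pass

model = LogisticRegression().fit(hours, passed)
proba = model.predict_proba([[9]])[0, 1]  # probability of passing after 9 hours
print(round(proba, 3))                    # a value strictly between 0 and 1
```

Thresholding the probability (commonly at 0.5) turns the continuous output into the discrete class label, which is why this is classification despite the “regression” name.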

Decision Tree

This is one of my favorite algorithms, and I use it quite frequently. It is a type of supervised learning algorithm that is mostly used for classification problems. Surprisingly, it works for both categorical and continuous dependent variables. In this algorithm, we split the population into two or more homogeneous sets, based on the most significant attributes/independent variables, to make the groups as distinct as possible.
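The splitting-into-homogeneous-groups idea can be sketched with scikit-learn; the [age, income] features and buy/no-buy labels below are invented.

```python
# Decision tree: learn splits on the most informative attribute.
from sklearn.tree import DecisionTreeClassifier

X = [[25, 30], [30, 35], [22, 28],   # [age, income]: did not buy
     [45, 90], [50, 95], [48, 88]]   # [age, income]: bought
y = [0, 0, 0, 1, 1, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(tree.predict([[47, 92], [24, 29]]))  # → [1 0]
```

Inspecting the fitted tree (for example with `sklearn.tree.export_text`) shows the actual threshold it chose, which is exactly the “most significant attribute” split described above.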

SVM (Support Vector Machine)

It is a classification method. In this algorithm, we plot each data item as a point in n-dimensional space (where n is the number of features you have), with the value of each feature being the value of a particular coordinate.
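The points-in-feature-space idea can be sketched in two dimensions with scikit-learn; the two clusters below are made up so the separating boundary is obvious.

```python
# SVM: each item is a point in feature space; the classifier finds
# the boundary that best separates the classes.
from sklearn.svm import SVC

X = [[1, 1], [1, 2], [2, 1],    # class 0 cluster
     [6, 6], [6, 7], [7, 6]]    # class 1 cluster
y = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[1.5, 1.5], [6.5, 6.5]]))  # → [0 1]
```

With a linear kernel the boundary is a straight line (a hyperplane in higher dimensions); other kernels let the same API handle curved boundaries.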

Naive Bayes

It is a classification technique based on Bayes’ theorem with an assumption of independence between predictors. In simple terms, a Naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature. For example, a fruit may be considered to be an apple if it is red, round, and about 3 inches in diameter. Even if these features depend on each other or upon the existence of the other features, a naive Bayes classifier would consider all of these properties to independently contribute to the probability that this fruit is an apple.
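The apple example above can be sketched with Gaussian Naive Bayes, treating [redness, roundness, diameter in inches] as independent numeric features; all the values are illustrative.

```python
# Naive Bayes: each feature contributes independently to the class probability.
from sklearn.naive_bayes import GaussianNB

# features: [redness (0-1), roundness (0-1), diameter (inches)]
X = [[0.90, 0.90, 3.0], [0.80, 0.95, 3.2], [0.85, 0.90, 2.9],  # apples
     [0.90, 0.40, 8.0], [0.20, 0.50, 7.5], [0.30, 0.45, 9.0]]  # not apples
y = ["apple", "apple", "apple", "other", "other", "other"]

clf = GaussianNB().fit(X, y)
print(clf.predict([[0.88, 0.92, 3.1]]))  # → ['apple']
```

Even though redness, roundness, and size are correlated in real fruit, the model multiplies their per-feature probabilities as if they were independent, which is the “naive” assumption the text describes.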



Machine Learning Algorithms Today

Machine Learning algorithms can predict patterns based on previous experiences. The overarching practice of Machine Learning includes both robotics (dealing with the real world) and the processing of data (the computer’s equivalent of thinking). These algorithms find predictable, repeatable patterns that can be applied to eCommerce, Data Management, and new technologies such as driverless cars. The full impact of Machine Learning is just starting to be felt, and may significantly alter the way products are created, and the way people earn a living.

Basic Algorithms

There are a variety of Machine Learning algorithms capable of assisting automated Data Modeling programs, and improving Data Management, eCommerce, and robotics. Listed below are some of the basic categories and related areas:

Deep Learning uses neural networks. Neural networks attempt to imitate how the human brain works. Interconnected “artificial” neurons are arranged in multiple processing layers (other Machine Learning systems commonly use just two). The additional processing layers provide higher-level abstractions, offering better classifications and more accurate predictions. Deep Learning is ideal for working with Big Data, voice recognition, and conversational skills.

Support Vector Machines (SVMs) are “supervised” learning models with associated learning algorithms. These algorithms analyze data and are used for classification and for “regression analysis.” (Regression analysis uses statistics to estimate the relationships among variables, supporting modeling and analysis of several variables when the focus is on those relationships. More precisely, it helps in understanding how a “criterion variable” changes in value when one of the independent variables changes while the other independent variables remain fixed.) SVMs are good at recognizing facial images and handwriting.

Probabilistic Models typically attempt to predict the best response by creating a model with a probability distribution. One of this model’s advantages is that it returns both the prediction, and the degree of certainty. A probabilistic model is meant to give a distribution of possible outcomes. It can describe all predicted outcomes and predict the probability of each. It is often used to provide “relevance” to search engine results.

Ensemble Learning algorithms combine the outputs from different Predictive Analytics models to produce a single output. Bootstrap Aggregating algorithms were the first effective Ensemble Learning algorithms. Bootstrap aggregating, also referred to as bagging, is an ensemble meta-algorithm for Machine Learning, created to improve the accuracy and stability of programs used in regression and statistical classification. Although bagging is normally used with decision tree methods, it is adaptable to any type of Machine Learning method.
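Bagging as described above can be sketched with scikit-learn (assumed available), whose `BaggingClassifier` defaults to decision trees as the base model, matching the usual pairing noted in the text; the data is synthetic.

```python
# Bagging: train many models on bootstrap resamples of the data,
# then combine their votes into a single, more stable output.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier

X, y = make_classification(n_samples=200, random_state=0)
bag = BaggingClassifier(n_estimators=25, random_state=0).fit(X, y)
print(f"{len(bag.estimators_)} trees, accuracy {bag.score(X, y):.2f}")
```

Each of the 25 trees saw a different resampled view of the data; averaging their votes is what buys the stability that single decision trees lack.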

eCommerce Applications

Machine Learning algorithms are transforming major portions of the economy, altering everything from online product marketing and customized search engines to self-driving cars and advanced medical imaging. The use of Machine Learning is broadening to include all aspects of eCommerce, the Internet, and technology. Consider the following:

Improved Customer Segmentation:

Customer segmentation slots people into categories based on the patterns of their behavior. By identifying patterns among customers, and potential customers, e-commerce retailers can maximize their profits. Machine Learning algorithms can provide the information necessary for identifying new customers, and the opportunity to target specific customers with advertising.

Trend Forecasting and Analytics:

There is also an algorithm designed for analyzing and forecasting trends for online retailers. Prior to Machine Learning and Big Data, online retailers often experienced severe and chaotic shifts in fashion and trends. Purchased inventory would simply sit in storage, wasting investment capital and reducing profits. Currently, eCommerce merchants analyze and interpret as much data as possible, trying to anticipate the shifting tides of trends and fashion.

Fraud Detection and Prevention:

The amount of Internet payment fraud is increasing constantly, and, for the moment, seems to be unstoppable. Not too surprisingly, there has been a steady increase in e-commerce fraud each year since 1993, with a 19% increase compared to 2013. For every $100 of product sold, fraudsters steal 5.65 cents.



How Ecommerce Brands can prepare for Artificial Intelligence

There is no question that Artificial Intelligence (AI) and other technological advances such as virtual reality or augmented reality are taking the marketing world by storm.

As these technological trends grow in both relevance and importance in the coming months, it is important that e-commerce brands prepare. This preparation means not only understanding what AI is, but also what kind of innovations are going to increase business and productivity for e-commerce brands.

Below outlines a few of these tips to help make sure you’re on the right path.

You Need To Get Innovative

At the heart of e-commerce best practices is innovation. There is no shortage of examples of how retailers have embraced technology.

Look at Kate Spade’s Everpurse, the smartphone charging handbag that is both fashion-forward and functional. This example captures true innovation with technology. It took an item that most women use every day and made it even more functional.

Now, Artificial Intelligence is, of course, a little different, but the concept still applies for innovation in the e-commerce world.

Artificial Intelligence is, by definition, “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages”. Using this understanding of AI, we can begin to see how bringing technological intervention into the regular tasks of users on your e-commerce site can make your brand stand out and be even more useful to your audience.

E-commerce brands and AI Considerations

There are so many ways that AI is starting to become incorporated into the e-commerce world, and we can anticipate many more ways to come in the near future.

Below are some examples to consider as you think about your own brand and begin to strategize incorporating AI into your own platform:

  1. Visual Search

Have you ever been at someone’s home, an office space, or somewhere else in the world and thought, “I wonder how much that is?” or “I wonder where I can find that?” Okay, I am not suggesting you start getting the retail value of your mother-in-law’s sofa, but understandably, sometimes we see things that spark an idea for our own lives.

Visual Search allows users to snap a photo of something in their physical world and search for it, ultimately allowing the physical world to cross over into the digital one.

There are several ways this is being implemented with search, but e-commerce is truly the field that this development will benefit most.

  2. Give Your Clients a Personal Shopper

One unique way to conceptualize AI is by designing user experience around proactive personal assistance rather than a simple passive search on your e-commerce site.

Having bots as personal shoppers is one way to enhance a user’s experience on your site and give them the impression that they have a unique personal shopping experience as they would in a store.

  3. Reaching “Generation Z”

This cohort, along with millennials, is looking for more experiential and interactive ways of shopping, both in-store and online. 80% of them are more likely to visit a store that offers entertainment, and 80% say the same about stores offering VR and AR technology. Meanwhile, 79% are more likely to visit stores that offer interactive experiences that help customize products, too.

  4. Balance Between Privacy and Trust

From suggested products with Google Search Ads to integrated calendars and traffic alerts, people can really feel as though they are getting a unique user experience and assistance from their technology. That being said, it is going to remain important for e-commerce companies to make privacy and trust a priority for their visitors, as many are starting to be wary of privacy breaches and invasions into personal data.

  5. Chat Bots

We all know that chat platforms have become increasingly important for business, and especially e-commerce websites. Chat bots can allow for easier navigation on your site, answer users’ immediate questions and provide recommendations based on brief surveys or past visits to your site.

The key here is that the chat will feel even more like a human experience for the customer and improve your customer service on all fronts since AI can be even more available for any immediate needs that come up on your site.


© Copyright 2013 Veltrod