Which tool(s) or systems do you use for processing data?
We use existing open-source tools such as Elasticsearch combined with the Kibana visualisation tool, Python with the scikit-learn library, as well as NoSQL databases such as Redis.
Analytical solutions to specification / the technology we use
We can deliver a custom-built solution, from the minimum scope of a service accessed via an API up to full back-end (API or database) and front-end integration. We can also cover the development of web-based and mobile applications through partners we have collaborated with from the beginning. We know how to use the functionality of cloud services such as MS Azure, AWS and Google services.
We focus on analytical rather than purely technical solutions, and we work extensively with IoT infrastructure (UP boards, Movidius cards…)
We deliver end-to-end solutions and technologies to our customers, including operations
Implementation of tailor-made projects for customers in the fields of artificial intelligence and data science
How we work with data
Generally speaking, two phases can be distinguished: model training and scoring. For training, a set of historical data needs to be set aside and transformed into a form suitable for modelling, and the model is then trained on it. Scoring is subsequently done on real data, which also needs to be transformed (either in a database, or directly as part of scoring the model).
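The train/score split described above can be sketched with scikit-learn, which the document mentions as one of our tools. This is a minimal illustration; the data, features and model choice here are invented for the example:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# -- Training phase: set aside historical data and transform it --
rng = np.random.default_rng(0)
X_hist = rng.normal(size=(200, 3))        # historical features (synthetic)
y_hist = (X_hist[:, 0] > 0).astype(int)   # historical labels (synthetic)

scaler = StandardScaler().fit(X_hist)     # the transformation is learned on history
model = LogisticRegression().fit(scaler.transform(X_hist), y_hist)

# -- Scoring phase: real data passes through the *same* transformation --
X_new = rng.normal(size=(5, 3))
scores = model.predict_proba(scaler.transform(X_new))[:, 1]
print(scores.shape)
```

The key point is that the transformation fitted on historical data is reused unchanged at scoring time, whether it runs in a database or directly in the scoring service.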
We collect data from various sources and perform the complex mathematical operations needed to synchronise data collected at different points in time. For cloud-stored data, transfers outside the cloud are charged based on the volume of data transferred, so moving data out of the cloud incurs additional financial cost. One possible solution is to place the analysis in the same cloud where the system is located; however, the platform's capabilities then need to be understood, e.g. its ability to mine pre-processed and aggregated data.
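One common way to synchronise measurements collected from different sources at different times is an as-of join. A minimal pandas sketch follows; the sensor names and timestamps are invented for illustration:

```python
import pandas as pd

# Two sources sampled at different moments (invented data).
temp = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01 00:00", "2024-01-01 00:10", "2024-01-01 00:20"]),
    "temp_c": [20.1, 20.4, 20.9],
})
vib = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01 00:02", "2024-01-01 00:11", "2024-01-01 00:19"]),
    "vibration": [0.31, 0.29, 0.35],
})

# As-of join: align each temperature reading with the latest
# vibration reading at or before its timestamp.
aligned = pd.merge_asof(temp, vib, on="ts", direction="backward")
print(aligned)
```

Note that the first temperature reading has no earlier vibration reading, so its value is missing after the join; handling such gaps is part of the synchronisation work described above.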
Our solution can be used on any cloud platform (the models run as microservices). Model inputs are typically data that require (scalable) preprocessing.
In our approach, preprocessing and model fitting are therefore designed as separate tasks. Preprocessing can be done either on our data platform or on any other data platform (including your own system).
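The separation of preprocessing and model fitting can be expressed as two independent steps, each of which could run on a different platform. A hedged sketch follows; the function names, data and model choice are illustrative, not our actual implementation:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

def preprocess(X, scaler=None):
    """Task 1: runs independently of model fitting,
    e.g. on our data platform or on the customer's own system."""
    if scaler is None:
        scaler = StandardScaler().fit(X)
    return scaler.transform(X), scaler

def fit_model(X_t, y):
    """Task 2: consumes already-preprocessed features;
    the fitted model can then be deployed as a microservice."""
    return Ridge().fit(X_t, y)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=100)

X_t, scaler = preprocess(X)   # preprocessing step
model = fit_model(X_t, y)     # fitting step, decoupled from the above
pred = model.predict(preprocess(rng.normal(size=(3, 4)), scaler)[0])
print(pred.shape)
```

Because the two steps share only the fitted transformation object, the preprocessing task can be relocated to whichever platform holds the data, while the model stays packaged as a microservice.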
The analytical solution should be able to handle the demanding transfer of synchronised data collected from various sources at different points in time, which makes a clear and transparent data set to work with essential.
Types of data analyses we offer
Typically, we solve demanding tasks in the fields of image and sound processing, natural language processing (NLP) and data science.
The specific analyses in each case always depend on the particular topic at hand – data classification, clustering, predictive analysis methods and, depending on the scope, other methods such as deep learning (using neural networks).
Machine learning (with deep learning as its next, more advanced level), predictive modelling and descriptive modelling
We deal with a relatively broad range of problems – predictive models, anomaly detection, condition monitoring and predictive maintenance, optimisation, behavioural analysis, and image processing (both deep learning and classic computer vision (CV)). We frequently tackle a number of different tasks, using various machine learning techniques to optimise a single comprehensive process. We do not focus on text mining, natural language processing, etc.
Machine Learning Algorithms
Get in touch
Come and visit our offices, or simply send us an email anytime. We are open to all suggestions from our audience.