What cities should learn from eCommerce
Data moves the world, and it looks as if whoever learns to use it will one day be able to take control of it. Data, and the new academic disciplines growing around it, is not just another buzzword of the digital era. It sits at the top of the food chain, and cities should learn to feed on it as well.
The increased interest in data generation, collection, and analysis is related to the massive growth of eCommerce and social media. The vendors of these products and services, who moved their business to the Internet, were able to recognize patterns of customer behavior very quickly thanks to the digital traces left behind on their websites. More importantly, they learned to read these traces and use them not only to raise their conversion rates but also to deliver more valuable services and build durable relationships with their customers.
Do it like Amazon
Today's debate about Smart Cities makes the discussion about data collection and analysis unavoidable. Cities can follow the success stories of the best-performing companies. By processing data from open platforms, they can not only gather information about the needs of the city but also learn more about the wishes and demands of its citizens.
If they learn the language of the data, they can build an effective and functional city with a high quality of life for everyone. Getting inspired is not that demanding: citizens should be considered paying customers who have the right to choose their preferred e-shop. If you offer them transparent services at affordable prices, meet delivery times, surprise them with a loyalty discount, or just add something extra to the order, they will keep coming back. The increased mobility of young people is a signal that in the near future cities will have to court their citizens the same way a seller courts a customer.
Harvest and analyse
The volume of freely available data in a city is gigantic. Each of us generates it – with a mobile phone, smartwatch, car navigation, every working pacemaker or insulin pump, every door-opening sensor, every security camera, every single electronic device. Multiply this by the real number of people and working devices in a city, and we get an idea of how much data we are dealing with 24 hours a day, 7 days a week, over a whole lifetime. The role of a smart city's infrastructure and data-processing platform does not consist of randomly collecting and storing this data. Its main role lies in analysis and in follow-up, data-based suggestions for traffic, security, or healthcare optimization. The data can also be used to regulate noise or VOC pollution in vulnerable city quarters.
At today's pace of digitalization, civilization produces 2.5 exabytes of data each day, and the IoT only accelerates data generation: 90% of all data in the world has been created over the last two years.
In every smart city, the data collection contains data that is currently neither analyzed nor processed, yet may prove useful in the future. Bear in mind that data which seems like garbage from my point of view may well be found meaningful when viewed through different optics. For exactly this reason, it is very important to avoid looking upon the data as one's own property and to stay open to sharing it with third parties. Combined with third-party data, it can reveal insights that neither side could find alone.
If anything, we possess a sufficient amount of data. The final product of data processing – the information – nevertheless remains the most important topic. It is, if I may say so, exactly what a Smart City or Smart Industry has to generate as its biggest benefit. It might sound easy, but it is not: the task is complex and represents quite a challenge for data engineers. Their work is a multidisciplinary field combining data visualization, data analysis, data engineering, databases and, of course, delivery of data to the customer. Data engineers work as the plumbers of Big Data "pools", putting together the pipelines and filters. Only once this work is properly done can they draw the relevant data from these "pools" for analysis and transform it into the final information for the customer.
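The plumbing metaphor can be sketched in a few lines of code – a hypothetical illustration with invented sensor names and readings, not any real CitySys component:

```python
# Minimal sketch of a data pipeline: raw readings flow through
# filters ("pipes") before being transformed into information.
# All sensor names and values below are invented for illustration.

from statistics import mean

raw_readings = [
    {"sensor": "noise-01", "db": 62.0},
    {"sensor": "noise-01", "db": None},    # faulty reading
    {"sensor": "noise-02", "db": 71.5},
    {"sensor": "noise-02", "db": 180.0},   # physically implausible outlier
    {"sensor": "noise-01", "db": 64.4},
]

def drop_invalid(readings):
    """Filter: remove readings with missing values."""
    return [r for r in readings if r["db"] is not None]

def drop_outliers(readings, max_db=130.0):
    """Filter: remove physically implausible values."""
    return [r for r in readings if r["db"] <= max_db]

def summarize(readings):
    """Transform: average noise level per sensor."""
    per_sensor = {}
    for r in readings:
        per_sensor.setdefault(r["sensor"], []).append(r["db"])
    return {sensor: round(mean(vals), 1) for sensor, vals in per_sensor.items()}

# The "pipeline": each step consumes the previous step's output.
data = raw_readings
for step in (drop_invalid, drop_outliers):
    data = step(data)

info = summarize(data)
print(info)  # average noise level per sensor, cleaned of bad readings
```

The point of the sketch is the shape, not the content: only after the filters are in place does the final aggregation yield information a customer can act on.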
The materialized intuition
Every city and its citizens are aware of their own sore spots. Some are bothered by traffic density, low security, the inconvenient placement of bus shelters, or something else that is often difficult to grasp. If the city or its dissatisfied citizens keep justifying the measures taken or motions raised only through their own optics, everyone will consider them subjective and they will be hard to sustain. If, on the other hand, the requests for measures and motions are based on data, they will rest on an objective foundation for change.
5 basic life stages of data
Data is generated around us whether we want it or not – by devices, sensors, and software, or as a result of secondary data processing. In the context of contemporary data pollution, its quality and usability are often overlooked or ignored, and in some cases its very presence is suppressed (e.g. SPAM). Ideally, data should be generated as planned and with a clear purpose of further processing.
Today and in the days to come, preprocessing of generated data is inevitable. It is estimated that by the end of 2023 IoT technology will generate 163 ZB of data – an incredible number from a present-day perspective. Such an amount of data could be transferred in reasonable time today only over high-quality data networks. From an economic and technological point of view, data preprocessing therefore represents a big advantage: it is more effective to collect, transfer, and store data that has already been preprocessed.
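What edge preprocessing buys can be shown with a toy example – the field names and sample values are invented for illustration, assuming a device that samples once per second and reports once per minute:

```python
# Sketch of edge preprocessing: instead of transferring every raw
# sample, a device sends one compact per-minute aggregate.
# Sample values and field names are invented for illustration.

from statistics import mean

# 60 raw temperature samples taken over one minute (one per second)
raw_samples = [21.0 + (i % 5) * 0.1 for i in range(60)]

# Preprocessed record: one small summary instead of 60 values
aggregate = {
    "count": len(raw_samples),
    "min": round(min(raw_samples), 1),
    "max": round(max(raw_samples), 1),
    "avg": round(mean(raw_samples), 2),
}

print(aggregate)
print(f"transferred {len(aggregate)} fields instead of {len(raw_samples)} samples")
```

The trade-off is the usual one: the aggregate is far cheaper to transfer and store, at the cost of discarding the raw samples it was computed from.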
Data collection and storage is a necessary precondition for further processing, analysis, interpretation, and visualization; the collected data is the ground on which everything else is built.
Data processing can generally be considered the targeted manipulation of data in order to obtain meaningful information. This is something we – knowingly or not – already do daily: in ERP systems, where managers evaluate their effectiveness and accountants analyze their figures, or in the weather forecast, which is nothing more than a transformation of collected data into a hypothesis.
Data visualization is the final product of the previous phases and the result of targeted data collection. It is the most accessible form in which a user can understand the generated data, and with smart solutions of growing complexity it is a must. Data visualization is perceived by many disciplines as the modern equivalent of visual communication; it has become an independent branch of science involving the creation and study of visual displays of data. For intelligible and effective communication, it uses statistical graphics, charts, infographics, and other tools in which numerical data is encoded with dots, lines, and bars to convey quantitative messages visually. Efficient visualization helps users analyze and reason about data and evidence, and supports the understandability and usability of complex data. Data visualization can be considered both an art and a science.
Author: Radovan Slíž, CitySys