Machine Learning - Key tool for disruptive digital transformation

By Christian Setzwein November 26, 2020

The world has changed significantly and at an incredible pace in recent decades due to digitalization, and it will continue to do so. Many companies are currently undergoing digital transformation, or their clever minds are considering how to initiate this change in their own organizations.

The key factors of this digital transformation are:

  • Data is available digitally on data carriers (stick, CD, SSD, server, cloud) and is no longer stored on paper, in people’s heads, or not stored at all.
  • Data is no longer locked to a single local computer, but can be accessed in real time through networks and sensors, primarily via the internet. Thanks to the 5G standard for mobile systems, this will soon also apply to moving objects such as cars, drones, robots, or parts in a factory.
  • Data can be processed by programs. These programs are written by people (software engineers) – it is no coincidence that experts in this field are desperately sought after.

These three techniques have solved countless problems, here are just a few examples:

  • People can access their bank accounts from home.

  • People can buy products online as well as in-store.

  • Machinery can be controlled by digital systems instead of paper and human shouting.

The digital solutions to the problems are all based on the fact that people have written programs that make it possible for people (users) to handle the data and have implemented the problem-specific processing steps.

For the strategic thinker, two obvious questions arise at this point:

  1. Are there problems that cannot be solved in the way described?
  2. Are there problems that could be solved better in a different way?

I would like to illustrate the answers to these questions using an example:

Let’s assume we want to write a program that can identify whether there is a cat in a photo. This software could then be used to control the opening of a cat flap. A program written by a programmer would have to contain instructions of the type “if isCatEye(search_eye) and distance(position(search_eye), position(search_ear)) < 5 cm”.
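Such a rule-based approach might be sketched as follows. Note that every name here (find_eye, find_ear, distance, is_cat) is a hypothetical placeholder invented for illustration, and each helper would itself require an enormous amount of real code:

```python
# Hypothetical sketch of the rule-based cat detector described above.
# The helper functions are placeholders only.

def find_eye(pixels):
    # Would have to scan every pixel for all color constellations,
    # light conditions, pupil sizes, and eye sizes. Placeholder: finds nothing.
    return None

def find_ear(pixels):
    # Faces the same combinatorial problem. Placeholder: finds nothing.
    return None

def distance(pos_a, pos_b):
    # Euclidean distance between two (x, y) positions in the image.
    ax, ay = pos_a
    bx, by = pos_b
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def is_cat(pixels):
    eye = find_eye(pixels)
    ear = find_ear(pixels)
    if eye is None or ear is None:
        return False  # fails as soon as, say, a leaf covers the eye
    # The fixed 5 cm threshold only works for one camera distance;
    # every other distance, angle, and pose would need extra rules.
    return distance(eye, ear) < 5
```

The fixed threshold and the hand-written helpers are exactly where this approach breaks down, as the difficulties below show.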

The following difficulties arise for the programmer of the cat detection:

  • The distance query is only possible for one particular image size. Separate program code would have to be written for every distance between the cat and the camera.

  • An obstacle could be in front of the eye, such as a flying leaf. The program code would then not work.

  • The function “search eye” alone would have to search all the pixels in the image. The programmer would have to write code for all color constellations, light and shadow constellations, pupil sizes, and sizes of the eye in the image. The amount of program code for this endeavor would be huge.

  • For parts of the cat such as the eyes, nose, mouth, ears, body, fur, and tail, it would be necessary to check whether the positions, distances, and proportions match for all views (front, back, side) and forms of movement (jumping, creeping, walking, and all intermediate forms).

  • How exactly could you distinguish a cat’s eye from a rat’s eye in a certain constellation, for all cat breeds, in every case?

You wouldn’t want to ask a programmer to do this. An almost infinite amount of program code could perhaps solve the task approximately, but it would be very error-prone: some constellation would inevitably be missing from the program with its fixed queries.

With machine learning, the approach is completely different. What a cat is, what makes it a cat, is not described by program code. No programmer is at work, but a machine that learns. No programmer defines distinctions such as mouth, eye or cat ears. Nor does the machine initially distinguish between distance, position, light, color, size.

Instead, the machine is trained with a million pictures of cats (each labelled: there is a cat in this picture) and another million pictures of dogs, rats, birds, or other things (each labelled: there is no cat in this picture). From these examples, the supervised learning algorithm develops patterns and distinctions on its own, without anyone programming explicit rules, until it recognizes cats in pictures as well as a human does (human-like performance). The recognition criteria can resemble human perception (e.g. outlines, colors), but they don’t have to. Fixed program code is replaced by statistical patterns.
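To make the contrast concrete, here is a deliberately tiny toy sketch of supervised learning. This is an illustration only, and an assumption on my part: real image classifiers are neural networks trained on millions of photos, not a single perceptron on four hand-made examples. Each “image” is reduced to two invented feature values, and the label says whether it shows a cat:

```python
# Toy supervised learning: a single perceptron whose weights are
# adjusted from labelled examples instead of being written by hand.

training_data = [
    ([0.9, 0.8], 1),  # cat
    ([0.8, 0.9], 1),  # cat
    ([0.1, 0.2], 0),  # not a cat
    ([0.2, 0.1], 0),  # not a cat
]

weights = [0.0, 0.0]
bias = 0.0

def predict(features):
    # Weighted sum of the features, thresholded at zero.
    s = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if s > 0 else 0

# Training: nudge the weights after every wrong prediction.
# No rule like "eye-to-ear distance < 5 cm" appears anywhere.
for _ in range(20):  # a few passes over the labelled examples
    for features, label in training_data:
        error = label - predict(features)
        for i, x in enumerate(features):
            weights[i] += 0.1 * error * x
        bias += 0.1 * error
```

After training, `predict` classifies new feature vectors it has never seen, for example `predict([0.85, 0.75])` for a cat-like input. The decision boundary was learned from the labels, not programmed.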

Now back to the questions and the corresponding answers for the strategic thinker:

  1. Yes, there are problems that cannot be solved with conventional programming, but rather with machine learning approaches.
  2. Yes, there are problems that have been solved conventionally with programming so far, but that can be solved better with machine learning approaches.

As for the second answer, we recall the seemingly artificially intelligent expert systems of the 1980s. There, the expert answers were hard-coded as rules, and the approach was correspondingly limited. Chat solutions were also originally hard-coded and limited. Today’s chatbots, which derive answers from the right training data, are vastly better because the “most intelligent” answer is not hard-coded but determined from statistical patterns.

So, with the help of machine learning approaches, we can solve new problems and solve some problems better than with conventional programming. We no longer have to program to find solutions; the algorithms program themselves. For the product managers of this world, a tool is available that is as powerful as the advent of the internet once was. For me, machine learning is therefore the key tool for further disruptive digital transformations, and it will further increase the speed of change.

I would like to help everyone on the good side of the force to use this powerful tool in the right places and to find the (new) problems and solutions. Feel free to contact me; there are also new tools that help to

  • comprehensively understand currently known application scenarios of machine learning,
  • strategically derive use cases for your own areas of application,
  • develop selected approaches in a focused way (prototyping) and validate them with colleagues and customers.

I hope you continue to enjoy your introduction to the world of machine learning. Suggestions and comments are very welcome.