07.05.2020

Machine learning and artificial intelligence – pure hype (?) Part 3

Welcome to the last part of our series "Machine Learning – all just hype?". If you missed the first two parts, you can find part 1 here and part 2 under this link.

In this final post, we turn to concrete practical examples.

4 Three examples of use

Now for the specific use cases. To give you some ideas, I will show you three examples from our portfolio, divided into three categories. First, I will use a traditional duplicate-payments analysis to explain how known, transaction-based analyses can be improved. I will then describe new analytical approaches, using the specific example of comparing master data quality expressed as an objectively determined indicator. The third example shows how business partners in financial accounting can be segmented – not in the traditional way, but based on the posting structure those partners exhibit.

 

4.1 Old wine in new bottles

One use of traditional transaction-based analysis is to identify potential duplicate payments – in other words, invoices or credit notes that have inadvertently been posted, and therefore paid, more than once. The approaches described above permit improvements in two areas:

  1. Fuzzy analytics logic
  2. Fewer false positives

At first glance, fuzzy analytics logic does not sound like an improvement. What we mean is that data derivatives can be used to identify not only exact duplicate postings but also postings which, while not identical, are similar, stem from the same business transaction, and are therefore genuine duplicate payments. The possibilities here go further than the usual methods (amount +/- tolerance value, variances in individual characters in the posting text, etc.).
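As a toy illustration (not our production logic), such fuzzy matching can be sketched in a few lines of Python: two postings are flagged as potential duplicates if the vendor matches, the amounts lie within a relative tolerance, and the posting texts are highly similar. All field names, sample data and thresholds below are invented for illustration.

```python
from difflib import SequenceMatcher

def likely_duplicates(a, b, amount_tol=0.01, text_sim=0.85):
    """Flag two postings as potential duplicates if the vendor matches,
    the amounts are within a relative tolerance, and the posting texts
    are highly similar. Thresholds are illustrative, not tuned values."""
    same_vendor = a["vendor"] == b["vendor"]
    close_amount = abs(a["amount"] - b["amount"]) <= amount_tol * max(a["amount"], b["amount"])
    similar_text = SequenceMatcher(None, a["text"].lower(), b["text"].lower()).ratio() >= text_sim
    return same_vendor and close_amount and similar_text

# Invented sample postings:
inv1 = {"vendor": "4711", "amount": 1250.49, "text": "Invoice 2020-0815"}
inv2 = {"vendor": "4711", "amount": 1250.49, "text": "Invoice 2020/0815"}  # text differs by one character
inv3 = {"vendor": "4711", "amount": 980.00, "text": "Freight charges"}

print(likely_duplicates(inv1, inv2))  # -> True
print(likely_duplicates(inv1, inv3))  # -> False
```

A real implementation would of course combine more fields and more sophisticated similarity measures, but the principle – exact equality replaced by tolerances and similarity scores – is the same.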

Generating fewer false positives, on the other hand, is an obvious benefit. The important point is that a fuzzy analysis is performed first, so that in addition to obvious duplicate payments – which the SAP system itself may already prevent – less obvious, more complex duplicate payments can also be identified. With conventional methods, you often have to work through long lists of potential incorrect postings, a high percentage of which prove irrelevant. Our approach instead uses machine learning to identify patterns in known duplicate payments. If future transactions resemble such a pattern, our software recognises this and raises an alert. Due to their complexity, these patterns generally cannot be expressed through rules such as "if invoice amount is more than €1,250.49 and company code is 0001 and item text is empty, then duplicate payment". As mentioned above, even transactions that are merely similar to known patterns are identified.

This process is best illustrated by the example from the introduction: the photo collage automatically generated by my mobile phone. The same face – corresponding to a pattern made up of the position and size of the eyes, ears, nose, and so on – is recognised across different photographs. The fact that the face changes over time, or that the photographs are taken from different angles, is compensated for by this fuzzy analytics logic. By avoiding rule-based analyses, our learning process adapts to changes in duplicate payments over time; the identified patterns therefore never become obsolete.
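The pattern idea can be sketched with a deliberately simple nearest-neighbour vote over labelled historical cases – a stand-in for the actual learning procedure, which this post does not disclose. All feature values and labels below are invented, and in practice the features would be scaled before computing distances.

```python
import math

# Invented training data: feature vectors for historical posting pairs
# (amount, days between postings, text similarity of the two postings),
# labelled 1 for confirmed duplicate payment, 0 for legitimate.
training = [
    ((1250.49, 2, 0.95), 1),
    ((980.00, 30, 0.20), 0),
    ((410.75, 1, 0.90), 1),
    ((5300.00, 90, 0.10), 0),
]

def predict(features, k=3):
    """Majority vote among the k known cases closest to `features`.
    Note: real features would be scaled first, otherwise the amount
    dominates the Euclidean distance."""
    nearest = sorted(training, key=lambda row: math.dist(row[0], features))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 > k else 0

print(predict((1300.00, 3, 0.92)))  # resembles the known duplicates -> 1
```

The point of the sketch is only that no explicit "if amount > X and company code = Y" rule appears anywhere: the decision boundary is implied by the labelled examples, so it moves as the examples change.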

 

4.2 A new approach

The quality of your supplier master data expressed in a single indicator – sounds good, doesn't it? As well as describing the quality of your master data with a meaningful indicator, our analytics approaches also allow you to make comparisons – between systems within your company, or in benchmarking against other companies or reference datasets. You can directly identify where data quality problems lie, and also continuously monitor the indicators – after all, the volume of data grows incessantly through data capture, system migrations and consolidations.
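As a purely hypothetical sketch of what such an indicator might look like, the following computes the share of supplier master records that pass simple completeness checks. The required fields, the sample records and the scoring rule are illustrative assumptions, not our actual methodology.

```python
def quality_score(records, required=("name", "iban", "vat_id")):
    """Share of supplier master records passing simple completeness
    checks, expressed as a single indicator between 0 and 100.
    The required fields are an illustrative assumption."""
    def complete(rec):
        return all(rec.get(field) for field in required)
    if not records:
        return 0.0
    return round(100 * sum(complete(r) for r in records) / len(records), 1)

# Invented sample records:
suppliers = [
    {"name": "Acme GmbH", "iban": "DE00XXXX", "vat_id": "DE123456789"},
    {"name": "Beta AG", "iban": "", "vat_id": "DE987654321"},  # missing IBAN
]
print(quality_score(suppliers))  # -> 50.0
```

A single number like this is what makes the comparisons described above possible: the same score can be computed per system, per company code or per reference dataset and then tracked over time.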

Or, in another example, imagine that your group has implemented global standardisation of accounting processes – in theory. You now want to check the processes for uniformity, but without loading complicated reference processes into a process-mining tool, generating event logs, and then comparing process graphs that look like a spaghetti monster. Here too, based on data derivatives, you can easily express the structure of the accounting processes of each sub-group as KPIs that can then be compared. Transforming processes into figures both enables comparisons and allows processes that break uniformity to be identified as anomalies.
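One simple way such a "process as figures" comparison could work – again an illustrative sketch, not our actual derivative logic – is to describe each sub-group by the normalised frequency of its posting types and compare those profile vectors by distance. The document-type codes and counts below are invented.

```python
from collections import Counter
import math

def process_profile(postings):
    """Normalised frequency of posting types: a simple 'process KPI' vector."""
    counts = Counter(p["doc_type"] for p in postings)
    total = sum(counts.values())
    return {t: n / total for t, n in counts.items()}

def profile_distance(p, q):
    """Euclidean distance between two profiles over the union of types."""
    types = set(p) | set(q)
    return math.sqrt(sum((p.get(t, 0) - q.get(t, 0)) ** 2 for t in types))

# Invented posting mixes for three sub-groups:
group_a = [{"doc_type": t} for t in ["RE"] * 80 + ["KR"] * 15 + ["ZP"] * 5]
group_b = [{"doc_type": t} for t in ["RE"] * 78 + ["KR"] * 17 + ["ZP"] * 5]
group_c = [{"doc_type": t} for t in ["RE"] * 40 + ["KR"] * 10 + ["ZP"] * 50]

base = process_profile(group_a)
for name, grp in [("B", group_b), ("C", group_c)]:
    print(name, round(profile_distance(base, process_profile(grp)), 3))
# B is close to A (uniform process); C stands out as an anomaly.
```

No event logs and no process graphs are needed: sub-group C is flagged simply because its numeric profile sits far from the others.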

 

4.3 Doing things differently

You have undoubtedly already segmented your customers and suppliers – in the traditional way using an A-B-C classification, or simply using an existing customer or supplier group that is in essence already defined as an attribute in the master data. It would now be interesting to segment apparently homogeneous groups (e.g. all domestic raw-material suppliers within segment A, B or C) again, this time based on the digitally mapped business relationship (e.g. purely on the accounting data, or from purchase orders through goods receipts and incoming invoices to outgoing payments). Will clusters emerge within this apparently homogeneous group that you would not expect? Admittedly, this tends more towards explorative analysis, so the identified groups need to be interpreted differently from the analyses described in 4.1 and 4.2. Nevertheless, these approaches, which present seemingly familiar aspects in a different way, are interesting and can guide you from simple business insight to fascinating findings in the field of fraud detection.
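To make the idea concrete, here is a minimal k-means clustering over invented per-supplier features (average days to payment, share of credit notes) within one supposedly homogeneous segment. In practice one would use a library such as scikit-learn and far richer features; this sketch only shows how an unexpected sub-group can fall out of the data.

```python
import math
import random

def kmeans(points, k=2, iters=20, seed=42):
    """Minimal k-means for small feature vectors (illustrative only)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        centroids = [
            tuple(sum(x) / len(c) for x in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

# Invented features per supplier in one "homogeneous" A-segment:
# (average days to payment, share of credit notes)
suppliers = [(30, 0.02), (28, 0.03), (31, 0.01),   # typical behaviour
             (5, 0.40), (4, 0.35)]                 # unexpected sub-group
clusters = kmeans(suppliers, k=2)
print(sorted(len(c) for c in clusters))  # -> [2, 3]
```

The two suppliers that pay within days and post an unusually high share of credit notes form their own cluster – exactly the kind of unexpected grouping that would prompt a closer, explorative look.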

5 Summary

This blog post has considered various topics, some briefly and some in more depth. If you are already involved with data analytics, I hope you have found points of relevance to your existing approaches. If you have only just started to address this issue, or are embarking on new methods such as machine learning, you may find it a small practical guide on how to become more familiar with the topic: identifying suitable data, using the correct methods for transformation, and then applying the appropriate algorithms with clear, achievable goals in mind. Numerous technical aids are available elsewhere – our focus in this blog is solely on the methodical approach.

If you are a customer of ours (or would like to become one, which would of course delight us), you will have seen that, reflecting our vision of "making data analytics work for you", we also include the current hot topics of artificial intelligence and machine learning in our portfolio. We are also well placed in terms of personnel, having expanded our team to include data scientists. We help close the circle between traditional data analytics and new methods, and in doing so offer our customers an integrated solution on their journey towards digitalisation.

