Overpromised and Saturated?! - Positioning AI in IP

Dr. Matthias Pötzl is Managing Director of Dennemeyer Octimine. Dennemeyer Octimine uses natural language processing (NLP), machine learning and artificial intelligence algorithms to analyze and compare millions of scientific and technical text documents (e.g. patents, scientific publications and scientific news) in seconds, retrieving relevant information much faster than conventional methods. Founded in 2015 as a spin-off from the University of Munich (LMU) and the Max Planck Institute, the company is now part of the Dennemeyer IP Group.

The company's main product, octimine, is a semantic patent search engine delivered as software-as-a-service (SaaS) that makes patent searching easy and fast. No specialized knowledge of Boolean search operators or technology classes is needed to perform prior art or freedom-to-operate (FTO) searches.

In this day and age, is it possible to attend an Intellectual Property (IP) conference or trade show without being bombarded with the buzzword "artificial intelligence" (AI)? Is it possible to talk about modern IP management and its future without discussing AI? And why are we starting to feel so oversaturated with the topic?

Looking at the presentations, products and companies at popular IP trade shows and conferences, as well as the latest blog posts of IP service providers, one might think the IP world has embraced and implemented AI wholeheartedly and without complication.

Attitudes toward AI proliferation

In this context, two recurring issues emerge. One concerns the expectations of what AI can actually do and will do in the future. The other relates to the divergent, sometimes subjective definitions and interpretations of what AI is. We, the providers of this technology, are not always entirely blameless in this confusion. Statements that an "AI thinks like an IP professional" or that an "AI is able to understand millions of patent texts" are misleading or simply wrong.

From these differing presumptions and baselines of understanding, three broad schools of thought emerge.

While one group is afraid of AI fully automating jobs performed by humans, a second looks forward to an AI-driven future. Those holding the latter view see AI-based applications less as a threat than as help with the painful and tedious aspects of their profession. Finally, a third group does not care at all, believing that AI does not significantly impact their professional or private lives.

The nature and disciplines of AI

John McCarthy, one of the pioneering scholars of AI, stated that "AI is the science and engineering of making intelligent machines." This concept is very far-reaching and covers a large set of algorithms, ranging from simple rule-based processes and static ontologies to the latest neural networks and deep learning models. Unfortunately, in practice, this broad definition leads to many technologies being referred to as AI when they would more accurately be described as machine learning.

"Machine learning is a more concrete and focused field of study that aims at giving computers the ability to learn without being explicitly programmed" (Arthur Samuel, 1959). As an example, a machine-learning algorithm to detect cats in pictures does not contain thousands of explicit rules describing the characteristics of cats. Instead, it needs only a mathematical model of the learning process and labeled training data, i.e., many pictures of cats and of other animals. From these two elements, the machine extracts and learns the characteristics of a cat by itself.
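The learning-from-labels idea can be sketched in a few lines of Python. This is purely an illustrative toy, not Octimine's actual method: each "picture" is reduced to two invented numeric features, and the model learns one average feature vector (centroid) per class from the labeled examples alone, with no hand-written cat rules.

```python
def train(examples):
    """Learn one mean feature vector (centroid) per label from labeled data."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose learned centroid is closest (squared distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Labeled training data: ([toy feature values], label)
training_data = [
    ([0.9, 0.8], "cat"), ([0.8, 0.9], "cat"),
    ([0.2, 0.1], "dog"), ([0.1, 0.3], "dog"),
]
model = train(training_data)
print(predict(model, [0.85, 0.75]))  # a cat-like feature vector -> "cat"
```

The point of the sketch is the division of labor: the code specifies only the mathematical model (centroids plus a distance), while everything cat-specific comes from the training data.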

One of the most popular and research-intensive subfields of machine learning is deep learning. Inspired by the human brain, deep learning is a set of machine-learning algorithms that attempts to learn in multiple levels, corresponding to various levels of abstraction (Deng & Yu 2014). In most cases, when people talk about neural networks, they actually mean deep-learning-based neural networks with many hidden layers.
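The "multiple levels" can be made concrete with a minimal sketch of a feed-forward network: each layer transforms the previous layer's output into a slightly more abstract representation. The weights below are illustrative constants chosen for the example; in a real system they would be learned from data.

```python
def relu(values):
    """A common nonlinearity: negative activations are clipped to zero."""
    return [max(0.0, v) for v in values]

def layer(weights, biases, inputs):
    """One level of abstraction: weighted sums of the inputs, then a nonlinearity."""
    return relu([sum(w * x for w, x in zip(row, inputs)) + b
                 for row, b in zip(weights, biases)])

def forward(net, inputs):
    """Pass the input through every layer in turn - the 'deep' in deep learning."""
    out = inputs
    for weights, biases in net:
        out = layer(weights, biases, out)
    return out

# A tiny stack of two layers: 3 input features -> 2 hidden units -> 1 output
net = [
    ([[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]], [0.0, 0.1]),  # hidden layer
    ([[1.0, -1.0]], [0.2]),                              # output layer
]
result = forward(net, [1.0, 0.5, -0.5])
print(result)
```

Deep networks simply stack many such layers, which is what lets them build high-level concepts (edges, shapes, whiskers, cats) out of raw inputs.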

In simplified terms, AI is the overarching field. Machine learning is a subset of AI, and deep learning, which has become the most popular research domain, is again a subset of machine learning and is the narrowest of the three concepts.

Recent developments in AI and machine learning

John McCarthy also summarized the current state of the art very elegantly: "As soon as it works, no one calls it AI anymore." Some examples? Think about the spam folder of your email account, your recommendations while shopping online, or Google Maps. These applications are already well established, meaning a lot of AI technology is already integrated into our daily lives. On reflection, this statement also emphasizes that AI cannot solve everything today; moreover, it is still questionable whether AI will ever reach the same level of intelligence as humans.

The necessity of good data

Based on our long experience – now more than 10 years – in applying AI technologies to IP data, we can say that the quantity and quality of the input data are crucial for machine learning. Data quality is especially significant when the data is used as training data for a new tool or algorithm. In our context, multilingualism and the quality of machine translations have often limited the accuracy of analyses in the past. A lot has happened in this area in the last 3-5 years. Providing AI applications at the highest level takes a data provider like IFI CLAIMS, which has invested heavily in the quality of its data and machine translations, making it much easier to apply AI to patent data.

So, what does all this information mean for the use of AI in IP? Is AI in IP maybe overpromised and oversold? If you want to know the answers to these questions, follow us on LinkedIn and join our numerous webinars and talks on this topic.